5.0 - 10.0 years
20 - 35 Lacs
Kochi, Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment.

Key Responsibilities:
• Design and implement machine learning models and pipelines using AWS SageMaker and related services.
• Develop and maintain robust data pipelines for training and inference workflows.
• Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions.
• Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies.
• Optimize model performance and ensure scalability and reliability in production environments.
• Monitor deployed models for drift, performance degradation, and anomalies.
• Document processes, architectures, and workflows for reproducibility and compliance.

Required Skills & Qualifications:
• Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch).
• Solid understanding of machine learning algorithms, model evaluation, and tuning.
• Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch.
• Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration.
• Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes).
• Familiarity with monitoring tools and logging frameworks for ML systems.
• Excellent problem-solving and communication skills.

Preferred Qualifications:
• AWS Certification (e.g., AWS Certified Machine Learning Specialty).
• Experience with real-time inference and streaming data.
• Knowledge of data governance, security, and compliance in ML systems.
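For illustration only, the sketch below shows the kind of train-and-evaluate step such a pipeline wraps; the dataset, model, and metric threshold are placeholder assumptions, and in this role the equivalent logic would typically run inside a SageMaker training job rather than locally.

```python
# Minimal, hypothetical train/evaluate step of the sort a SageMaker training
# container would execute; data and model choices are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in data; in practice this would be read from S3 inside the training job
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluation like this would gate whether the model artifact is promoted
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Validation AUC: {auc:.3f}")
```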
Posted 1 month ago
5.0 - 8.0 years
8 - 18 Lacs
Hyderabad
Work from Office
We are looking for a highly skilled Data Platform Monitoring & Workflow Engineer to support our enterprise-level SAS-to-cloud migration program. In this role, you will be responsible for automating workflows, establishing monitoring systems, and building code compliance standards for data processing pipelines across SAS Viya and Snowflake platforms.

Preferred candidate profile:
Experience with SAS tools or environments (training provided)
Familiarity with Docker/Kubernetes
Knowledge of cloud platforms (AWS or Azure)
Unix/Linux system administration
Experience building or customizing code compliance tools
Posted 1 month ago
12.0 - 16.0 years
0 Lacs
Pune, Maharashtra
On-site
As a key contributor at Avalara, you will be at the forefront of designing the canvas UI, shaping the DSL, and refining the workflow orchestration. Your creativity and passion for technology will be the driving force behind an integration revolution. You will be responsible for designing and implementing new features while maintaining existing functionality. Writing optimized, testable, scalable, and production-ready code will be a crucial part of your role. Additionally, you will be accountable for writing microservices and APIs, participating in design discussions, building POCs, and contributing to delivering high-quality products, features, and frameworks. Your involvement will span all phases of the development lifecycle, including planning, design, implementation, testing, deployment, and support. You will be required to take necessary corrective measures to address problems, anticipate problem areas in new designs, and focus on optimization, performance, security, observability, scalability, and telemetry. Following agile/scrum processes and rituals, along with adhering to set coding standards, guidelines, and best practices, will be essential. Collaboration with teams and stakeholders to understand requirements and implement the best technical solutions that are simple and intuitive is a key aspect of the role. Furthermore, providing technical guidance and mentorship to junior engineers, fostering a culture of continuous learning and professional growth within the team, will be part of your responsibilities.

Qualifications:
- Bachelor's or Master's degree in computer science or equivalent.
- 12+ years of full-stack experience in a software development role, shipping complex applications to large-scale production environments.
- Expertise in the C# or Java programming language.
- Knowledge of architectural styles and design patterns to solve complex problems with simple, intuitive design.
- Experience in architecting, building, and deploying (CI/CD) highly scalable distributed systems and frameworks for small businesses and enterprises.
- Experience working in an Agile team with hands-on experience with TDD and BDD.
- Experience converting monoliths to microservices or serverless architecture.
- Experience with monitoring and alerting tools and analyzing system metrics to determine root cause.
- Curiosity about how things work and a drive to improve code quality and development processes.
- Excellent analytical and troubleshooting skills to solve complex problems and critical production issues.
- Passion for delivering the best product in the business.
- Excellent communication skills to collaborate with both technical and non-technical stakeholders.
- Proven track record of delivering high-quality software projects on time.

About Avalara: Avalara is defining the relationship between tax and tech, with an industry-leading cloud compliance platform processing nearly 40 billion customer API calls and over 5 million tax returns a year. As a billion-dollar business, Avalara is continuously growing and expanding its tribe to achieve its mission of being part of every transaction globally. With a culture that empowers its people to win, Avalara instills passion and trust in its employees, fostering ownership and achievement. Join the Avalara team and experience a career that is as bright, innovative, and disruptive as the orange they love to wear.
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
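As a point of reference, the sketch below shows the kind of PySpark transformation this role maintains; the database, table, and column names are assumptions made for illustration, not details from the posting.

```python
# Illustrative PySpark aggregation over a Hive table; all identifiers are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-orders-aggregation")
    .enableHiveSupport()          # allows reading and writing Hive tables
    .getOrCreate()
)

orders = spark.table("raw_db.orders")          # hypothetical Hive source table

daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Write the aggregate back as a partitioned Hive table for downstream BI tools
daily_totals.write.mode("overwrite").partitionBy("order_date") \
    .saveAsTable("analytics_db.daily_order_totals")
```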
Posted 1 month ago
3.0 - 5.0 years
5 - 9 Lacs
New Delhi, Ahmedabad, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential. Location: Remote - Delhi / NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 1 month ago
5.0 - 10.0 years
12 - 20 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Responsibilities:
Build and manage data infrastructure on AWS, including S3, Glue, Lambda, OpenSearch, Athena, and CloudWatch, using IaC tools like Terraform.
Design and implement scalable ETL pipelines with integrated validation and monitoring.
Set up data quality frameworks using tools like Great Expectations, integrated with PostgreSQL or AWS Glue jobs.
Implement automated validation checks at key points in the data flow: post-ingest, post-transform, and pre-load.
Build centralized logging and alerting pipelines (e.g., using CloudWatch Logs, Fluent Bit, SNS, Filebeat, Logstash, or third-party tools).
Define CI/CD processes for deploying and testing data pipelines (e.g., using Jenkins, GitHub Actions).
Collaborate with developers and data engineers to enforce schema versioning, rollback strategies, and data contract enforcement.

Preferred candidate profile:
5+ years of experience in DataOps, DevOps, or data infrastructure roles.
Proven experience with infrastructure-as-code (e.g., Terraform, CloudFormation).
Proven experience with real-time data streaming platforms (e.g., Kinesis, Kafka).
Proven experience building production-grade data pipelines and monitoring systems in AWS.
Hands-on experience with tools like AWS Glue, S3, Lambda, Athena, and CloudWatch.
Strong knowledge of Python and scripting for automation and orchestration.
Familiarity with data validation frameworks such as Great Expectations, Deequ, or dbt tests.
Experience with SQL-based data systems (e.g., PostgreSQL).
Understanding of security, IAM, and compliance best practices in cloud data environments.
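For illustration, here is a minimal, framework-agnostic sketch of the post-ingest validation step described above; in the role this would typically be expressed as a Great Expectations suite or a Glue/PostgreSQL check, and the column names and thresholds here are invented placeholders.

```python
# Hypothetical post-ingest data quality check; column names and limits are illustrative.
import pandas as pd

def validate_post_ingest(df: pd.DataFrame) -> list[str]:
    """Return a list of failed checks; an empty list means the batch may proceed."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["event_id"].isna().any():
        failures.append("event_id contains nulls")
    if df["event_id"].duplicated().any():
        failures.append("event_id is not unique")
    if not df["amount"].between(0, 1_000_000).all():
        failures.append("amount outside expected range")
    return failures

batch = pd.DataFrame({"event_id": [1, 2, 3], "amount": [10.0, 250.0, 99.9]})
failures = validate_post_ingest(batch)
if failures:
    # In production this would raise and push an alert (e.g., via SNS/CloudWatch)
    raise ValueError(f"Data quality checks failed: {failures}")
```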
Posted 1 month ago
4.0 - 7.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Medical Coding Certified Fresher - Certifications (CPC/CPC-A/CCS/COC)

About R1 RCM: R1 is a leading provider of technology-driven solutions that help hospitals and health systems manage their financial systems and improve the patient experience. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry's most advanced technology platform, encompassing sophisticated analytics, AI, intelligent automation, and workflow orchestration. R1 is a place where we think boldly to create opportunities for everyone to innovate and grow. A place where we partner with purpose through transparency and inclusion. We are a global community of engineers, front-line associates, healthcare operators, and RCM experts that work together to go beyond for all those we serve. Because we know that all this adds up to something more, a place where we're all together better. R1 India is proud to be recognized amongst the Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing, inclusion, and diversity is demonstrated through prestigious recognitions, with R1 India being ranked amongst Best in Healthcare, recognized as one of India's Top 50 Best Workplaces for Women 2024, amongst India's Top 25 Best Workplaces in Diversity, Equity, Inclusion & Belonging 2024, Top 100 Best Companies for Women by Avtar & Seramount, and amongst the Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare work better for all by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 17,000+ strong in India with presence in Delhi NCR, Hyderabad, Bengaluru, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities.

Responsibilities:
Assign codes to diagnoses and procedures, using ICD (International Classification of Diseases) and CPT (Current Procedural Terminology) codes.
Ensure codes are accurate and sequenced correctly in accordance with government and insurance regulations.
Follow up with the provider on any documentation that is insufficient or unclear.
Communicate with other clinical staff regarding documentation.
Search for information in cases where the coding is complex or unusual.
Receive and review patient charts and documents for accuracy.
Review the previous day's batch of patient notes for evaluation and coding.
Ensure that all codes are current and active.

Qualifications:
Education: Any Graduate.
Successful completion of a certification program from AHIMA or AAPC.
Strong knowledge of anatomy, physiology, and medical terminology.
Familiarity with ICD-10 and CPT codes and procedures.
Solid oral and written communication skills.
Able to work independently.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate and create meaningful work that makes an impact in the communities we serve around the world.
We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.
Posted 1 month ago
8.0 - 13.0 years
5 - 9 Lacs
Hyderabad
Work from Office
7+ years of experience in back-end development with Java/Spring Boot.
4+ years of hands-on experience with Camunda BPM for workflow and decision modeling.
Expertise in BPMN 2.0, DMN, and workflow orchestration.
Experience with Camunda Optimize and Tasklist.
Strong experience with REST API integration and microservice architecture.
Solid understanding of software development lifecycle (SDLC) and agile methodologies.
Excellent problem-solving skills and ability to debug complex workflow scenarios.
Strong communication and interpersonal skills.
Design and implement business workflows and decision models using Camunda BPMN and DMN.
Integrate Camunda with enterprise systems using Java/Spring Boot and REST APIs.
Develop scalable, maintainable, and high-performance process applications.
Posted 1 month ago
8.0 - 13.0 years
9 - 14 Lacs
Bengaluru
Work from Office
10+ years of experience in back-end development with Java/Spring Boot.
4+ years of hands-on experience with Camunda BPM for workflow and decision modeling.
Expertise in BPMN 2.0, DMN, and workflow orchestration.
Experience with Camunda Optimize and Tasklist.
Strong experience with REST API integration and microservice architecture.
Solid understanding of software development lifecycle (SDLC) and agile methodologies.
Excellent problem-solving skills and ability to debug complex workflow scenarios.
Strong communication and interpersonal skills.
Design and implement business workflows and decision models using Camunda BPMN and DMN.
Integrate Camunda with enterprise systems using Java/Spring Boot and REST APIs.
Develop scalable, maintainable, and high-performance process applications.
Posted 1 month ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Capgemini Invent: Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what’s next for their businesses.

Your Role:
Solution Design & Architecture
Implementation & Deployment
Technical Leadership & Guidance
Client Engagement & Collaboration
Performance Monitoring & Optimization

Your Profile:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
3-8 years of experience in designing, implementing, and managing data solutions.
3-8 years of hands-on experience working with Google Cloud Platform (GCP) data services.
Strong expertise in core GCP data services, including:
BigQuery (Data Warehousing)
Cloud Storage (Data Lake)
Dataflow (ETL/ELT)
Cloud Composer (Workflow Orchestration - Apache Airflow)
Pub/Sub and Dataflow (Streaming Data)
Cloud Data Fusion (Graphical Data Integration)
Dataproc (Managed Hadoop and Spark)
Proficiency in SQL and experience with data modeling techniques.
Experience with at least one programming language (e.g., Python, Java, Scala).
Experience with Infrastructure-as-Code (IaC) tools such as Terraform or Cloud Deployment Manager.
Understanding of data governance, security, and compliance principles in a cloud environment.
Experience with CI/CD pipelines and DevOps practices.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and as part of a collaborative team.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
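As a small illustration of the BigQuery work listed above, the sketch below runs an analytical query with the Python client; the project, dataset, and table names are placeholder assumptions.

```python
# Hypothetical BigQuery aggregation query; identifiers are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics-project")

query = """
    SELECT order_date, SUM(amount) AS daily_revenue
    FROM `example-analytics-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY order_date
    ORDER BY order_date
"""

# Runs the query as a BigQuery job and iterates the result rows
for row in client.query(query).result():
    print(row.order_date, row.daily_revenue)
```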
Posted 1 month ago
4.0 - 6.0 years
4 - 8 Lacs
Kolkata, Hyderabad, Pune
Work from Office
Title: AWS Engineer
Location: Offshore (Remote)

Job Overview: We are seeking a skilled AWS Engineer to support the buildout and scaling of CodePulse, an internal engineering metrics platform. This is a contract position for 36 months, focused on implementing AWS infrastructure for high-throughput metric ingestion and analytics. The ideal candidate has a strong track record of building scalable, event-driven systems using core AWS services such as ECS Fargate, SQS, API Gateway, RDS, S3, and ElastiCache. Experience working on similar data processing or pipeline-based platforms is highly desirable.

Key Responsibilities:
Design, implement, and maintain secure, auto-scaling AWS infrastructure for a container-based microservice application.
Deploy ECS (Fargate) workloads that process messages from SQS queues and write results to RDS and S3.
Set up and tune CloudWatch alarms, logs, and metrics for system observability and alerting.
Configure and optimize infrastructure components: SQS, SNS, API Gateway, RDS (PostgreSQL), ElastiCache (Redis), and S3.
Support integration with GitHub and Jira by securely handling API credentials, tokens, and webhook flows.
Write and manage infrastructure-as-code using Terraform or AWS CDK, with support for versioning and team hand-off.
Work with internal engineers to troubleshoot issues, optimize performance, and manage deployment workflows.

Qualifications:
4-6 years of hands-on experience working as an AWS DevOps or Cloud Engineer.
Proven experience deploying and scaling services using ECS Fargate, SQS, API Gateway, RDS (Postgres), and S3.
Experience with Redis caching using ElastiCache and familiarity with tuning cache strategies.
Strong experience with CloudWatch, including logs, alarms, and dashboard setup.
Proficient with Terraform or AWS CDK for infrastructure automation.
Strong understanding of VPCs, IAM roles and policies, TLS, and secure communication patterns.
Demonstrated experience building or supporting event-driven microservices in a production setting.
Ability to work independently in a remote, distributed team and communicate clearly.

Preferred Qualifications:
Experience building internal tools or platforms with metric processing, workflow orchestration, or CI/CD integration.
Familiarity with GitHub Actions, Docker, and container image deployment via ECR.
Experience optimizing AWS infrastructure for cost efficiency and auto-scaling under burst loads.
Prior experience integrating with third-party APIs like GitHub, Jira, or ServiceNow (optional but a plus).

Location: Pune, Hyderabad, Kolkata, Jaipur, Chandigarh
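To give a feel for the SQS-to-storage worker pattern this role supports, here is a hedged sketch of the kind of code an ECS Fargate task might run; the queue URL, bucket name, and message schema are placeholder assumptions, not part of the posting.

```python
# Hypothetical SQS consumer that archives events to S3; all resource names are placeholders.
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/codepulse-metrics"  # placeholder
BUCKET = "codepulse-metrics-raw"  # placeholder

def poll_once() -> None:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,          # long polling reduces empty receives
    )
    for msg in resp.get("Messages", []):
        payload = json.loads(msg["Body"])
        # Persist the raw event; a real worker would also upsert derived rows into RDS
        s3.put_object(
            Bucket=BUCKET,
            Key=f"events/{payload.get('repo', 'unknown')}/{msg['MessageId']}.json",
            Body=msg["Body"].encode("utf-8"),
        )
        # Delete only after successful processing so failed messages are retried
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_once()
```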
Posted 2 months ago
10.0 - 20.0 years
20 - 35 Lacs
Bengaluru
Hybrid
Role & responsibilities:
Install and upgrade AVEVA MES and System Platform solutions
Design, configure, and deploy AVEVA MES solutions (Operations, Performance, and Quality modules)
Develop and maintain MES workflows, data models, and reports for real-time production monitoring
Integrate MES with SCADA, PLCs, and ERP systems using OPC, REST APIs, and SQL databases
Implement traceability, work order execution, and production tracking in MES
Develop custom scripts (SQL, .NET, JavaScript, XML, JSON) for MES automation
Map IDoc XML to the MES database and custom databases
Design operator and supervisor interfaces for MES using AVEVA Skelta BPM, System Platform APIs, and other framework solutions
Provide technical support and troubleshooting for MES applications
Work with cross-functional teams to optimize manufacturing processes and ensure system reliability
Conduct user training and documentation for MES system usage and best practices
Ensure MES system security, compliance, and performance optimization

Preferred candidate profile:
Bachelor's in Computer Science, Industrial Automation, or a related field.
5-8 years in MES implementation, preferably AVEVA MES.
Experience in Food & Beverage or Life Sciences.
Team management and customer-facing experience.
Proficient in AVEVA System Platform, Enterprise Integrator, and Skelta Workflow.
Strong knowledge of ISA-95 MES architecture and manufacturing operations.
SQL expertise (SQL Server, Oracle) and experience with .NET, JavaScript, XML, JSON.
Knowledge of OPC UA/DA, industrial protocols, and ERP-MES integration (SAP or similar).
Posted 2 months ago
4.0 - 9.0 years
8 - 12 Lacs
Pune
Work from Office
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com

In one sentence: We are seeking a Data Engineer with advanced expertise in Databricks SQL, PySpark, Spark SQL, and workflow orchestration using Airflow. The successful candidate will lead critical projects, including migrating SQL Server Stored Procedures to Databricks Notebooks, designing incremental data pipelines, and orchestrating workflows in Azure Databricks.

What will your job look like?
Migrate SQL Server Stored Procedures to Databricks Notebooks, leveraging PySpark and Spark SQL for complex transformations.
Design, build, and maintain incremental data load pipelines to handle dynamic updates from various sources, ensuring scalability and efficiency.
Develop robust data ingestion pipelines to load data into the Databricks Bronze layer from relational databases, APIs, and file systems.
Implement incremental data transformation workflows to update silver and gold layer datasets in near real-time, adhering to Delta Lake best practices.
Integrate Airflow with Databricks to orchestrate end-to-end workflows, including dependency management, error handling, and scheduling.
Understand business and technical requirements, translating them into scalable Databricks solutions.
Optimize Spark jobs and queries for performance, scalability, and cost-efficiency in a distributed environment.
Implement robust data quality checks, monitoring solutions, and governance frameworks within Databricks.
Collaborate with team members on Databricks best practices, reusable solutions, and incremental loading strategies.

All you need is...
Bachelor's degree in Computer Science, Information Systems, or a related discipline.
4+ years of hands-on experience with Databricks, including expertise in Databricks SQL, PySpark, and Spark SQL.
Proven experience in incremental data loading techniques into Databricks, leveraging Delta Lake's features (e.g., time travel, MERGE INTO).
Strong understanding of data warehousing concepts, including data partitioning and indexing for efficient querying.
Proficiency in T-SQL and experience in migrating SQL Server Stored Procedures to Databricks.
Solid knowledge of Azure Cloud Services, particularly Azure Databricks and Azure Data Lake Storage.
Expertise in Airflow integration for workflow orchestration, including designing and managing DAGs.
Familiarity with version control systems (e.g., Git) and CI/CD pipelines for data engineering workflows.
Excellent analytical and problem-solving skills with a focus on detail-oriented development.

Preferred Qualifications:
Advanced knowledge of Delta Lake optimizations, such as compaction, Z-ordering, and vacuuming.
Experience with real-time streaming data pipelines using tools like Kafka or Azure Event Hubs.
Familiarity with advanced Airflow features, such as SLA monitoring and external task dependencies.
Certifications such as Databricks Certified Associate Developer for Apache Spark or equivalent.
Experience in Agile development methodologies.
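To make the incremental-loading requirement above concrete, here is a minimal sketch of a Delta Lake MERGE INTO step on Databricks; the table and column names are illustrative assumptions, not details from this posting.

```python
# Minimal Delta Lake incremental upsert from a staged bronze batch into a silver table.
# All table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Staged incremental batch (e.g., new/changed rows landed in the bronze layer)
spark.table("bronze.customer_updates").createOrReplaceTempView("updates")

spark.sql("""
    MERGE INTO silver.customers AS t
    USING updates AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED AND s.updated_at > t.updated_at THEN
      UPDATE SET *
    WHEN NOT MATCHED THEN
      INSERT *
""")
```

An Airflow DAG would typically schedule this notebook or job and handle retries and dependencies, in line with the orchestration responsibilities described above.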
Why you will love this job: You will be able to use your specific insights to lead business change on a large scale and drive transformation within our organization. You will be a key member of a global, dynamic and highly collaborative team with various possibilities for personal and professional development. You will have the opportunity to work in a multinational environment for the global market leader in its field! We offer a wide range of stellar benefits including health, dental, vision, and life insurance as well as paid time off, sick time, and parental leave!
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
What you will do: In this vital role, we are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role prefers deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
Ensure data security, compliance, and role-based access control (RBAC) across data environments.
Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of the Enterprise Data Fabric architecture.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree and 4 to 6 years of Computer Science, IT, or related field experience OR Bachelor's degree and 6 to 8 years of Computer Science, IT, or related field experience.
AWS Certified Data Engineer preferred.
Databricks Certificate preferred.
Scaled Agile SAFe certification preferred.

Preferred Qualifications:
Must-Have Skills:
Hands-on experience in data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning on big data processing.
Strong understanding of AWS services.
Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
Ability to quickly learn, adapt, and apply new technologies.
Strong problem-solving and analytical skills.
Excellent communication and collaboration skills.
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
Deep expertise in the Biotech & Pharma industries.
Experience in writing APIs to make the data available to consumers.
Experienced with SQL/NoSQL databases, and vector databases for large language models.
Experienced with data modeling and performance tuning for both OLAP and OLTP databases.
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented.
Strong presentation and public speaking skills.
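As a small illustration of the relational-source ingestion called out in the responsibilities above, the sketch below reads a PostgreSQL table into Spark and lands it as a bronze Delta table; the host, credentials, and table names are placeholder assumptions.

```python
# Hypothetical JDBC ingestion into the bronze layer; connection details are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.internal:5432/clinical")  # placeholder host/db
    .option("dbtable", "public.lab_results")                               # placeholder table
    .option("user", "etl_reader")
    .option("password", "********")       # in practice, pulled from a secret store
    .option("driver", "org.postgresql.Driver")
    .load()
)

# Land the raw extract as a Delta table in the bronze layer for downstream transformation
source_df.write.format("delta").mode("append").saveAsTable("bronze.lab_results")
```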
Posted 2 months ago
3.0 - 6.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Extensive experience with composable infrastructure such as Liqid or GigaIO.
Advanced experience with Cisco UCS-X and UCS-C Series infrastructure, UCS Central, and Cisco Intersight.
Advanced experience with Dell R Series servers and Dell OME.
Advanced knowledge of workflow orchestration platforms and container technologies, including Kubernetes and Docker.
Proficient with Bash, Python, JavaScript, Ansible, Terraform, Redfish, or a similar scripting language.
Advanced knowledge of Robot Framework to automate QA testing and robotic processes.

Primary Skills:
Cisco Certified Network Professional DC (CCNP)
Linux Foundation Certified Engineer (LFCE)
VMware Certified Professional (VCP-DCV)
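For illustration of the Redfish-based automation mentioned above, here is a hedged sketch that enumerates systems via the standard Redfish REST endpoints; the BMC address and credentials are placeholders, and a production script would use proper certificate validation.

```python
# Hypothetical Redfish inventory walk; endpoint address and credentials are placeholders.
import requests

BASE = "https://bmc.example.local"   # placeholder BMC address
session = requests.Session()
session.auth = ("admin", "password")  # placeholder credentials
session.verify = False                # lab-only shortcut; use trusted certs in production

# Enumerate compute systems exposed by the Redfish service root
systems = session.get(f"{BASE}/redfish/v1/Systems").json()
for member in systems.get("Members", []):
    system = session.get(f"{BASE}{member['@odata.id']}").json()
    print(
        system.get("Name"),
        system.get("PowerState"),
        system.get("Status", {}).get("Health"),
    )
```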
Posted 2 months ago
1.0 - 4.0 years
2 - 5 Lacs
Hyderabad
Work from Office
ABOUT AMGEN: Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description: Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Data Engineer who can own the design, development and maintenance of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role prefers deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments.
Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
Apply data quality, data validation, and verification frameworks.
Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
Hands-on experience in data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning on big data processing.
Strong understanding of AWS services.
Ability to quickly learn, adapt and apply new technologies.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
Data Engineering experience in the biotechnology or pharma industry.
Experience in writing APIs to make the data available to consumers.
Experienced with SQL/NoSQL databases, and vector databases for large language models.
Experienced with data modeling and performance tuning for both OLAP and OLTP databases.
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
Minimum 5 to 8 years of Computer Science, IT, or related field experience.
AWS Certified Data Engineer preferred.
Databricks Certificate preferred.
Scaled Agile SAFe certification preferred.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 2 months ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Airflow
Good to have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, ensuring that the applications meet the required standards and functionality while adapting to any changes in project scope or requirements.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate training and knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Apache Airflow.
- Good To Have Skills: Experience with cloud platforms such as AWS or Azure.
- Strong understanding of workflow orchestration and scheduling.
- Experience with data pipeline development and management.
- Familiarity with containerization technologies like Docker.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Apache Airflow.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
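For illustration of the workflow orchestration and scheduling skills listed above, here is a minimal Airflow 2.x DAG sketch; the DAG id, schedule, and task bodies are placeholder assumptions.

```python
# Minimal illustrative Airflow DAG; identifiers and task logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def transform():
    print("apply transformations")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task   # defines the task ordering
```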
Posted 2 months ago
3.0 - 7.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Job Summary: Synechron is seeking an experienced Senior Data Engineer with expertise in AWS, Apache Airflow, and DBT to design and implement scalable, reliable data pipelines. The role involves collaborating with data teams and business stakeholders to develop data solutions that enable actionable insights and support organizational decision-making. The ideal candidate will bring data engineering experience, demonstrating strong technical skills, strategic thinking, and the ability to work in a fast-paced, evolving environment.

Software Requirements:
Required:
Strong proficiency in AWS services including S3, Redshift, Lambda, and Glue, with proven hands-on experience
Expertise in Apache Airflow for workflow orchestration and pipeline management
Extensive experience with DBT for data transformation and modeling
Solid knowledge of SQL for data querying and manipulation
Preferred:
Familiarity with Hadoop, Spark, or other big data technologies
Experience with NoSQL databases (e.g., DynamoDB, Cassandra)
Knowledge of data governance and security best practices within cloud environments

Overall Responsibilities:
Lead the design, development, and maintenance of scalable and efficient data pipelines and workflows utilizing AWS, Airflow, and DBT
Collaborate with data scientists, analysts, and business teams to gather requirements and translate them into technical solutions
Optimize Extract, Transform, Load (ETL) processes to enhance data quality, integrity, and timeliness
Monitor pipeline performance, troubleshoot issues, and implement improvements to ensure operational excellence
Enforce data management, governance, and security protocols across all data flows
Mentor junior data engineers and promote best practices within the team
Stay current with emerging data technologies and industry trends, recommending innovations for the data ecosystem

Technical Skills (By Category):
Programming Languages: Essential: SQL, Python (preferred for scripting and automation); Preferred: Spark, Scala, Java (for big data integration)
Databases/Data Management: Extensive experience with data warehousing (Redshift, Snowflake, or similar) and relational databases (MySQL, PostgreSQL); familiarity with NoSQL databases such as DynamoDB or Cassandra is a plus
Cloud Technologies: AWS cloud platform, leveraging services like S3, Lambda, Glue, Redshift, and IAM security features
Frameworks and Libraries: Apache Airflow, dbt, and related data orchestration and transformation tools
Development Tools and Methodologies: Git, Jenkins, CI/CD pipelines, Agile/Scrum environment experience
Security Protocols: Knowledge of data encryption, access control, and compliance standards in cloud data engineering

Experience Requirements:
At least 8 years of professional experience in data engineering or related roles with a focus on cloud ecosystems and big data pipelines
Demonstrated experience designing and managing end-to-end data workflows in AWS environments
Proven success in collaborating with cross-functional teams and translating business requirements into technical solutions
Prior experience mentoring junior engineers and leading data projects is highly desirable

Day-to-Day Activities:
Develop, deploy, and monitor scalable data pipelines using AWS, Airflow, and DBT
Collaborate regularly with data scientists, analysts, and business stakeholders to refine data requirements and deliver impactful solutions
Troubleshoot production data pipeline issues to resolve data quality or performance bottlenecks
Conduct code reviews, optimize existing workflows, and implement automation to improve efficiency
Document data architecture, pipelines, and governance practices for knowledge sharing and compliance
Keep abreast of emerging data tools and industry best practices, proposing enhancements to existing systems

Qualifications:
Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field; Master’s degree preferred
Professional certifications such as AWS Certified Data Analytics – Specialty or related credentials are advantageous
Commitment to continuous professional development and staying current with industry trends

Professional Competencies:
Strong analytical, problem-solving, and critical thinking skills
Excellent communication abilities to effectively liaise with technical and business teams
Proven leadership in mentoring team members and managing project deliverables
Ability to work independently, prioritize tasks, and adapt to changing business needs
Innovative mindset focused on scalable, efficient, and sustainable data solutions
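As one possible shape for the AWS + Airflow + DBT workflows described above, the sketch below chains an ingestion step with dbt transformations and tests; the DAG id, script path, and dbt project directories are placeholder assumptions.

```python
# Hypothetical Airflow DAG chaining ingestion with dbt run/test; paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="ingest_then_dbt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_raw = BashOperator(
        task_id="load_raw",
        bash_command="python /opt/pipelines/load_raw_to_s3.py",  # placeholder ingestion script
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )

    load_raw >> dbt_run >> dbt_test
```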
Posted 2 months ago
6 - 8 years
2 - 5 Lacs
Hyderabad
Work from Office
ABOUT AMGEN: Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description: Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role prefers deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments.
Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
Apply data quality, data validation, and verification frameworks.
Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
Hands-on experience in data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning on big data processing.
Strong understanding of AWS services.
Ability to quickly learn, adapt and apply new technologies.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
Data Engineering experience in the biotechnology or pharma industry.
Experience in writing APIs to make the data available to consumers.
Experienced with SQL/NoSQL databases, and vector databases for large language models.
Experienced with data modeling and performance tuning for both OLAP and OLTP databases.
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
Any degree and 6-8 years of experience.
AWS Certified Data Engineer preferred.
Databricks Certificate preferred.
Scaled Agile SAFe certification preferred.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 3 months ago
1 - 4 years
2 - 5 Lacs
Hyderabad
Work from Office
ABOUT AMGEN: Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description: Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible for designing, developing, and maintaining data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role prefers deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments.
Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
Apply data quality, data validation, and verification frameworks.
Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
Hands-on experience in data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning on big data processing.
Strong understanding of AWS services.
Ability to quickly learn, adapt and apply new technologies.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
Data Engineering experience in the biotechnology or pharma industry.
Experience in writing APIs to make the data available to consumers.
Experienced with SQL/NoSQL databases, and vector databases for large language models.
Experienced with data modeling and performance tuning for both OLAP and OLTP databases.
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
Minimum 5 to 8 years of Computer Science, IT, or related field experience.
AWS Certified Data Engineer preferred.
Databricks Certificate preferred.
Scaled Agile SAFe certification preferred.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 3 months ago
1.0 - 5.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Key Responsibilities

Synthetic Data Generation & Quality Assurance:
Design and implement scalable synthetic data generation systems to support model training
Develop and maintain data quality validation pipelines ensuring synthetic data meets training requirements
Build automated testing frameworks for synthetic data generation workflows
Collaborate with ML teams to optimize synthetic data for model performance

APIs & Integration:
Develop and maintain REST API integrations across multiple enterprise platforms
Implement robust data exchange, transformation, and synchronisation logic between systems
Ensure error handling, retries, and monitoring for all integration workflows

Data Quality & Testing:
Implement automated data validation and testing frameworks for ETL and synthetic data workflows
Translate data quality feedback from stakeholders into pipeline or generation process improvements
Proactively monitor and maintain data consistency across systems

Multi-System Integration & MCP Development:
Build and maintain tool registries for Model Control Protocol (MCP) integration across multiple enterprise systems
Develop robust APIs for multi-system communication through MCP frameworks
Design and implement workflows that coordinate multi-system interactions
Ensure reliable data flow and error handling across distributed system architectures

Cross-Functional Collaboration & Production Integration:
Partner with domain specialists to translate plan execution feedback into actionable insights
Work closely with Product Managers to align synthetic data generation with business requirements
Collaborate with Core Engineering teams to ensure seamless production deployment
Establish feedback mechanisms between synthetic data systems and production environments

Required Qualifications

Technical Skills:
Programming: Proficiency in Python, TypeScript (optional)
Data Engineering: Experience with data engineering frameworks and libraries (Pandas, Apache Airflow, Prefect)
APIs & Integration: Strong background in REST APIs and system integration
Databases: Experience with relational and NoSQL databases (PostgreSQL, MongoDB)
Cloud Platforms: Hands-on experience with AWS/GCP/Azure

Experience Requirements:
2+ years of experience in building production-scale data pipelines and orchestration systems
Demonstrated success in cross-functional collaboration in technical environments

Preferred Qualifications:
Familiarity with managing Kubernetes-based production workloads and workflow orchestration (Argo)
Familiarity with containerisation and orchestration with tools like Docker, Kubernetes, etc.
Familiarity with synthetic or large-scale data generation
Background in enterprise software integration
Experience with Model Control Protocol (MCP) or similar orchestration frameworks
Knowledge of automated testing frameworks for data pipelines

What We Offer:
Lots of learning: many systems are being built from the ground up, with no existing references or open-source projects to rely on. This will be the first time not just for you, but for the industry as well.
Opportunity to work at the forefront of enterprise-scale synthetic data generation
Collaborative environment with product teams, engineering, and domain specialists
Competitive compensation and comprehensive benefits
Professional development opportunities in cutting-edge data engineering and Kubernetes orchestration

Team Structure: You'll report to the AI Engineering Lead and work closely with:
ML Engineers developing foundation models
Product Managers defining business requirements
Product Specialists providing domain expertise
Backend Engineers handling production infrastructure

This role offers significant impact on our data capabilities and the opportunity to shape how we generate and utilize synthetic data for training enterprise systems.
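To illustrate the synthetic data generation and validation work described above, here is a minimal sketch that produces a synthetic table and fails fast on basic quality checks; the schema, distributions, and thresholds are invented placeholders, not part of the role's actual systems.

```python
# Hypothetical synthetic-data batch with an automated quality gate; schema is illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)

def generate_orders(n_rows: int) -> pd.DataFrame:
    """Generate a synthetic orders table with a controlled schema."""
    return pd.DataFrame({
        "order_id": np.arange(n_rows),
        "region": rng.choice(["NA", "EMEA", "APAC"], size=n_rows, p=[0.5, 0.3, 0.2]),
        "amount": rng.lognormal(mean=3.0, sigma=0.8, size=n_rows).round(2),
    })

def validate(df: pd.DataFrame) -> None:
    """Fail fast if the synthetic batch violates the expected schema or ranges."""
    assert df["order_id"].is_unique, "order_id must be unique"
    assert df["amount"].gt(0).all(), "amounts must be positive"
    assert set(df["region"].unique()) <= {"NA", "EMEA", "APAC"}, "unexpected region value"

batch = generate_orders(10_000)
validate(batch)
print(batch.describe())
```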
Posted Date not available
6.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
We're looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
• Lead the design of data warehouses, lakes, and ETL workflows.
• Collaborate with teams to gather requirements and build scalable solutions.
• Ensure data governance, security, and optimal performance of systems.
• Mentor junior engineers and drive end-to-end project delivery.
Requirements:
• 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects.
• Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms.
• Expertise in big data tools (e.g., Apache Spark, Kafka).
• Excellent communication skills and leadership abilities.
Preferred:
• Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.
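Since the role calls out workflow orchestration with Airflow, here is a rough sketch of the kind of ETL DAG it refers to. This assumes Airflow 2.4 or later; the DAG name and the extract/transform/load bodies are placeholders rather than a description of any specific stack.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull rows from a source system (API, OLTP database, files).
    print("extracting source data")


def transform(**context):
    # Placeholder: clean, deduplicate, and conform records to the warehouse model.
    print("transforming records")


def load(**context):
    # Placeholder: load conformed records into the warehouse (e.g., a staging schema).
    print("loading into warehouse")


with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```

In practice the same dependency structure would carry data-quality and governance checks as additional tasks between transform and load.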
Posted Date not available
15.0 - 20.0 years
13 - 17 Lacs
Hyderabad, Chennai
Work from Office
Job Overview
We are looking for a talented, enthusiastic Microsoft 365 Engineering Manager who will be responsible for overseeing the implementation, management, and optimization of Microsoft 365 services and operations within the organization. The Manager works closely with the IT team, business stakeholders, and the Microsoft support team to ensure a smooth and successful migration process. The role is responsible for designing, implementing, and maintaining Microsoft Office 365 solutions for the organization, guiding the technical direction, ensuring the quality and reliability of the services, and fostering a high-performing team environment.
Key Responsibilities
• Oversee the design, development, and deployment of Microsoft 365 services, ensuring high quality, security, and performance.
• Design and implement Office 365 solutions, including Exchange Online, SharePoint Online, OneDrive for Business, and Microsoft Teams.
• Develop and implement strategic roadmaps for Microsoft 365 adoption and usage.
• Provide technical support and guidance to end users and administrators on Microsoft 365 features and functionalities.
• Collaborate with other IT teams to integrate Office 365 with other systems and applications.
• Analyze the overall enterprise environment to find gaps and think outside the box to design and create functionality that will prove to be of value.
• Collaborate with various teams and departments to align Microsoft 365 services with business goals.
• Configure and deploy Microsoft 365 services and features, such as Exchange Online, SharePoint Online, OneDrive for Business, Teams, and Intune.
• Migrate identity, data, and applications from one Microsoft 365 tenant to another using various tools and methods, such as cross-tenant mailbox/SharePoint migration and third-party solutions.
• Perform testing and validation to ensure the functionality and integrity of the migrated data and applications.
• Document the migration process, procedures, best practices, and outcomes, ensuring knowledge transfer and efficient collaboration within the team.
• Develop and maintain PowerShell scripts to automate migration-related tasks and workflows, including provisioning, configuration, and troubleshooting, with a focus on enhancing efficiency and user productivity.
• Stay up to date with industry trends, best practices, and emerging technologies in the desktop management space, identifying opportunities for improvement and innovation.
Required Skills and Qualifications
• Bachelor's degree in computer science, engineering, or a related field, or equivalent work experience.
• A minimum of 15 years of IT experience, including at least 7 years working with Microsoft 365 suites, Microsoft Directory Services, and data migration projects.
• Strong knowledge of Microsoft 365 services and features, as well as migration tools and methods.
• Strong written and verbal communication and collaboration skills, with the ability to work effectively with cross-functional teams, stakeholders, and executive levels.
• Understanding of Identity and Access Management (IAM) and Privileged Identity Management concepts.
• Experience with Azure AD roles, Conditional Access policies, MFA, and related administration skills.
• Ability to manage multiple tasks, projects, and deadlines effectively, and to support team members with your expertise to achieve common goals.
• Proficient in PowerShell scripting and Exchange Online, SharePoint Online, and Entra ID administration.
• Certification in Microsoft 365 Fundamentals or Microsoft 365 Administrator is preferred.
• Solid understanding of Intune MDM/MAM, on-prem Active Directory, and Group Policy concepts.
• Knowledge of Defender for Office 365, Microsoft Security and Compliance, and best practices.
• Strong analytical and critical thinking skills, with the ability to analyze directory and M365 service logs and derive meaningful insights.
• Proactive mindset with a keen sense of ownership and the ability to work independently to drive initiatives forward.
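The posting calls for PowerShell automation; as a language-agnostic illustration of the same idea, here is a minimal Python sketch that inventories user accounts via the Microsoft Graph API ahead of a tenant migration. Token acquisition is omitted, and the required permission (User.Read.All) and helper name are assumptions for illustration only.

```python
import requests

GRAPH_USERS_URL = "https://graph.microsoft.com/v1.0/users"


def list_users(access_token: str) -> list[dict]:
    """Page through /users and return basic account records for a migration inventory.

    Assumes an app registration with the User.Read.All application permission;
    acquiring `access_token` (e.g., via an MSAL client-credentials flow) is out of scope here.
    """
    headers = {"Authorization": f"Bearer {access_token}"}
    url = f"{GRAPH_USERS_URL}?$select=id,displayName,userPrincipalName,accountEnabled"
    users = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        users.extend(payload.get("value", []))
        # Graph returns @odata.nextLink while more pages remain.
        url = payload.get("@odata.nextLink")
    return users
```

In this role the equivalent automation would more likely be written in PowerShell with the Microsoft Graph PowerShell SDK; the sketch above only shows the shape of the paging and inventory logic.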
Posted Date not available
14.0 - 20.0 years
18 - 25 Lacs
Chennai
Hybrid
Senior Program Manager - RPA Automation - 14+ Years - Chennai
An exciting opportunity to lead digital transformation initiatives at a strategic level. Drive innovation and automation across business processes while collaborating with cross-functional teams in a leadership role.
Location: Chennai
Your Future Employer
A high-growth organization known for delivering cutting-edge analytics and data engineering solutions. A people-first environment focused on innovation, collaboration, and continuous learning.
Responsibilities
1. Strategize, implement, and oversee transformation initiatives aligned with client and organizational objectives.
2. Lead large digital programs involving automation, RPA, AI/GenAI, CX tools, and process orchestration.
3. Collaborate with software engineers, analysts, and business stakeholders to improve end-to-end operations.
4. Drive regular project meetings and ensure closure of all program milestones.
5. Manage quality control, stakeholder communication, UAT, and post-deployment hypercare support.
Requirements
1. 5+ years of leadership experience in program management roles, with 14+ years overall.
2. Strong exposure to digital transformation in telecom order management and front-office processes.
3. Expertise in tools like Genesys, SFDC, ServiceNow, RPA, and other CX/AI platforms.
4. Excellent communication, stakeholder management, and organizational skills.
5. PMP certification and knowledge of emerging tech (AI/GenAI) will be an added advantage.
6. Experience in the Finance, Accounting, or Contact Centre (BPO) domain.
What is in it for you
• Leadership role in a global technology organization
• Exposure to cutting-edge digital transformation technologies
• Opportunity to work on high-impact programs across industries
Reach us
If you think this role is aligned with your career, kindly write me an email along with your updated CV at aayushi.goyal@crescendogroup.in for a confidential discussion on the role.
Disclaimer
Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with a memorable hiring experience. We do not discriminate on any basis and never charge candidates.
Note: Due to a high volume of applications, we may not be able to respond to every applicant individually. If you do not hear from us within a week, kindly assume that your profile has not been shortlisted.
Scammers can misuse Crescendo Global's name for fake job offers. We never ask for money, purchases, or system upgrades. Verify all opportunities at www.crescendo-global.com and report fraud immediately. Stay alert!
Profile Keywords: Jobs in Chennai, Jobs for Program Manager, Digital Transformation Jobs, Senior Program Manager Jobs, Telecom Transformation, RPA Jobs, AI/GenAI Program Manager, CX Tools, Stakeholder Management, PMP Program Manager, Automation Leadership, RPA Automation
Posted Date not available