
17074 Tuning Jobs - Page 10

JobPe aggregates listings for easy access; you apply directly on the employer's job portal.

5.0 - 9.0 years

4 - 8 Lacs

No locations specified

On-site

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do

As a Sr. Associate IS Security Engineer at Amgen, you will play a critical role in ensuring the security and protection of the company's information systems and data. You will implement security measures, conduct security audits, analyze security incidents, and provide recommendations for improvements. Your strong knowledge of security protocols, network infrastructure, and vulnerability assessment will contribute to maintaining a secure IT environment.

Roles & Responsibilities:
- Apply patches, perform OS upgrades, and manage platform end-of-life.
- Perform annual audits and periodic compliance reviews.
- Support GxP validation and documentation processes.
- Monitor and respond to security incidents.
- Correlate alerts across platforms for threat detection.
- Improve procedures through post-incident analysis.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's or Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.
Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), Snowflake, workflow orchestration, and performance tuning on big data processing.
- Solid understanding of security technologies and their core functionality.
- Experience analyzing cybersecurity threats, with up-to-date knowledge of attack vectors and the cyber threat landscape.
- Ability to prioritize tasks effectively and solve problems efficiently in a diverse, global team environment.
- Good knowledge of Windows and/or Linux systems.
- Experience with security alert correlation across different platforms.
- Experience with ServiceNow, especially CMDB, the Common Service Data Model (CSDM), and IT Service Management.
- SQL and database knowledge: experience working with relational databases, querying data, and optimizing datasets.

Preferred Qualifications:
- Familiarity with cloud services such as AWS (e.g., Redshift, S3, EC2, IAM) and Databricks (Delta Lake, Unity Catalog, tokens, etc.)
- Understanding of Agile methodologies (Scrum, SAFe)
- Knowledge of DevOps and CI/CD practices
- Familiarity with scientific or healthcare data domains

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
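The role above centers on correlating security alerts across platforms for threat detection. As a toy sketch (the alert fields `host`, `platform`, and `time`, and the fixed-window escalation rule, are invented for illustration; real SIEM correlation logic is far richer), alerts can be clustered per host and escalated only when multiple sources agree:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def correlate_alerts(alerts, window_minutes=15):
    """Group alerts by host into time-window clusters; escalate a
    cluster only when more than one platform reported on that host,
    since multi-source agreement is a stronger signal than any
    single tool firing alone."""
    by_host = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        by_host[alert["host"]].append(alert)

    window = timedelta(minutes=window_minutes)
    incidents = []
    for host, host_alerts in by_host.items():
        cluster = [host_alerts[0]]
        for alert in host_alerts[1:]:
            if alert["time"] - cluster[-1]["time"] <= window:
                cluster.append(alert)
            else:
                incidents.append((host, cluster))
                cluster = [alert]
        incidents.append((host, cluster))
    # Keep only clusters seen by more than one platform
    return [(h, c) for h, c in incidents
            if len({a["platform"] for a in c}) > 1]

t0 = datetime(2025, 1, 1, 9, 0)
alerts = [
    {"host": "web01", "platform": "edr",  "time": t0},
    {"host": "web01", "platform": "siem", "time": t0 + timedelta(minutes=5)},
    {"host": "db02",  "platform": "edr",  "time": t0 + timedelta(minutes=2)},
]
print(correlate_alerts(alerts))  # only the two-platform web01 cluster survives
```

The single-platform db02 alert is suppressed, while the EDR-plus-SIEM pair on web01 is escalated as one incident.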

Posted 1 day ago

Apply

5.0 years

1 - 10 Lacs

Hyderābād

On-site

JOB DESCRIPTION

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorgan Chase within the Consumer & Community Banking Technical Team, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. Drive significant business impact through your capabilities and contributions, and apply deep technical expertise and problem-solving methodologies to tackle a diverse array of challenges that span multiple technologies and applications.

Job responsibilities
- Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Develops secure, high-quality production code, and reviews and debugs code written by others
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture
- Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies
- Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience
- Demonstrated, strong hands-on Python/Java enterprise web development across all application tiers (middleware, integration, and database), with proven experience with design patterns
- Experience in design and architecture
- Experience in AWS (EKS, EC2, S3, EventBridge, Step Functions, SNS/SQS, Lambda) is a must
- Experience designing and developing scalable, high-performance applications using AWS-native event-driven services, including API Gateway
- Experience with AWS cloud monitoring tools such as Datadog, CloudWatch, and Lambda is needed
- Deep hands-on experience in Django, Flask, and object-oriented design and development methodology
- Experience with databases such as Amazon RDS, caching and performance tuning, REST APIs, and messaging (Kafka)
- Hands-on experience with development and test automation tools/frameworks (e.g., BDD and Cucumber)
- Experience with best practices for data pipeline design, data architecture, and processing of structured and unstructured data
- Ability to plan, prioritize, and follow through on work and meet deadlines in a fast-paced environment, while clearly articulating both technical and non-technical issues with stakeholders and partners such as DevOps, architects, QA testers, and product owners

Preferred qualifications, capabilities, and skills
- Experience in microservices
- Experience in the financial domain is preferred
- Exposure to artificial intelligence, machine learning, and mobile
- Exposure to agile methodologies such as CI/CD, Applicant Resiliency, and Security
- Hands-on practical experience in system design, application development, testing, and operational stability
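Among the skills above is caching and performance tuning. As a minimal illustration (the `fetch_fx_rate` function and its rate table are invented stand-ins for a slow downstream service), Python's `functools.lru_cache` memoizes repeated lookups so only the first call pays the latency cost:

```python
import functools
import time

@functools.lru_cache(maxsize=256)
def fetch_fx_rate(currency_pair):
    # Stand-in for a slow downstream call; a real implementation
    # would hit a database or REST service here.
    time.sleep(0.01)
    return {"USDINR": 83.2, "EURUSD": 1.09}.get(currency_pair)

fetch_fx_rate("USDINR")                  # slow: goes to the "service"
fetch_fx_rate("USDINR")                  # fast: served from the cache
print(fetch_fx_rate.cache_info().hits)   # → 1
```

In production the same idea appears as a TTL-bounded external cache (e.g., Redis) so stale rates expire; `lru_cache` alone never invalidates entries.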

Posted 1 day ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do

As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed.
The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member who assists in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's or Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), Snowflake, workflow orchestration, and performance tuning on big data processing.
- Proficiency in data analysis tools (e.g., SQL); proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores.
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Proven ability to optimize query performance on big data platforms.

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing.
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.
- Strong knowledge of Oracle / SQL Server, stored procedures, and PL/SQL; knowledge of Linux OS.
- Knowledge of data visualization and analytics tools such as Spotfire and Power BI.
- Strong understanding of data governance frameworks, tools, and best practices.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Professional Certifications:
- Databricks certification preferred.
- AWS Data Engineer/Architect.

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
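The ETL responsibilities described above (extract, transform with a data-quality rule, load) can be shown in miniature. This self-contained sketch uses Python's built-in sqlite3 as a stand-in for the real platforms; the table and column names are invented for illustration:

```python
import sqlite3

# Extract: a source table of raw, untrusted readings.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE raw_doses (patient TEXT, dose_mg TEXT)")
src.executemany("INSERT INTO raw_doses VALUES (?, ?)",
                [("p1", "250"), ("p2", "bad"), ("p3", "100")])

def transform(rows):
    """Normalize units (mg -> g) and enforce a data-quality rule:
    rows whose dose cannot be parsed are dropped."""
    for patient, dose in rows:
        try:
            yield patient, float(dose) / 1000.0
        except ValueError:
            continue

# Load: write the cleaned rows into the target table.
tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE doses_g (patient TEXT, dose_g REAL)")
tgt.executemany("INSERT INTO doses_g VALUES (?, ?)",
                transform(src.execute("SELECT * FROM raw_doses")))
tgt.commit()
print(tgt.execute("SELECT COUNT(*) FROM doses_g").fetchone()[0])  # → 2
```

The same extract/transform/load shape scales up in PySpark or Databricks; only the engine and the data volumes change.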

Posted 1 day ago

Apply

5.0 - 8.0 years

4 - 9 Lacs

Hyderābād

On-site

Job requisition ID: 83533
Date: Jul 26, 2025
Location: Hyderabad
Designation: Senior Consultant
Entity: Deloitte Touche Tohmatsu India LLP

Job Title: Oracle PL/SQL Senior Consultant
Workplace: Hyderabad or Bengaluru
Job Type: Full-time

Job Description:
An Oracle PL/SQL Developer is responsible for developing and maintaining Oracle database applications using the PL/SQL programming language.
- Developing and maintaining Oracle database applications using PL/SQL.
- Designing and developing database schemas, stored procedures, functions, and triggers using PL/SQL.
- Optimizing database performance by tuning SQL queries and PL/SQL code.
- Developing and executing test plans to ensure the quality and accuracy of PL/SQL code.
- Troubleshooting and resolving issues related to PL/SQL code.

Responsibilities:
- Designs, develops, implements, and maintains custom Oracle applications written in PL/SQL, and supports back-end data processing in PL/SQL.
- Good experience with analytical SQL queries and performance-tuning concepts.
- Knowledge of the architectural differences between Oracle 9i and 18c, and of ETL processes.
- Design and develop database objects such as stored procedures, functions, packages, and triggers.
- Develop code using ref cursors, collections, dynamic SQL, and exception handling.
- Able to work with collections such as PL/SQL records, PL/SQL tables, and nested tables in PL/SQL blocks, and with bulk collection.
- Good experience with Oracle Partitioning, a key feature.
- Must have experience creating scheduler jobs, views, and materialized views.
- Should have a good understanding of indexing.
- SQL query tuning using different tools; clear understanding of query plans.
- Perform code reviews to ensure code quality, standards compliance, and best practices.
- Debug and troubleshoot PL/SQL code to identify and resolve issues.
- Collaborate with cross-functional teams to gather and analyze requirements.
- Design and develop database schemas, tables, and relationships.
- Create and optimize SQL queries and PL/SQL code for efficient data retrieval and manipulation.
- Ensure data integrity, security, and performance in database operations.
- Collaborate with application developers to design and implement database structures that meet application requirements.
- Optimize database performance through indexing, query optimization, and other tuning techniques.
- Implement and maintain database security measures, such as roles, privileges, and access controls.
- Monitor and troubleshoot database issues.
- Excellent problem-solving, issue-identification, analytical, and technical documentation skills.
- Strong interpersonal skills and ability to work well in a team environment.
- Understanding of the SDLC process and tools, agile program management concepts, and version control.

Qualifications:
- Bachelor’s degree in Computer Science or a related field.
- 5 to 8 years of proven experience in database development.
- Strong proficiency in Oracle SQL and PL/SQL.
- Proven experience as a PL/SQL developer in a senior or lead role.
- Strong problem-solving skills and the ability to work collaboratively in a team environment.
- Exposure to cloud hosting and IT domains such as CI/CD pipelines and automated testing.
- Excellent communication and interpersonal skills.
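The query-tuning theme above (indexing, reading the query plan) can be demonstrated in miniature. This sketch uses Python's built-in sqlite3 purely as a stand-in for Oracle, with invented table and index names; in Oracle you would inspect `DBMS_XPLAN` output rather than `EXPLAIN QUERY PLAN`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reveals whether the optimizer scans the
    # whole table or seeks through an index.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
print(plan(query))   # before the index: a full-table SCAN

con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
print(plan(query))   # after: a SEARCH USING INDEX idx_orders_customer
```

The tuning workflow is the same at Oracle scale: read the plan, spot the full scan, add or adjust an index (or rewrite the predicate), and confirm the plan changed.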

Posted 1 day ago

Apply

8.0 years

5 - 7 Lacs

Pune

On-site

What you’ll do:

We at Eaton are innovating future products with a focus on global sustainability megatrends: Energy Transition, Electrification, and Digital Enablement. We are seeking a highly skilled and experienced Manufacturing Execution System (MES) System Architect to join our IT team and drive digital solutions within our plants. In this role, you will be responsible for the design, development, and implementation of Apriso technology. You will define MES-specific architectural standards, guiding cross-functional teams in delivering scalable, secure, and future-ready solutions, and ensuring alignment with enterprise digital manufacturing goals. You will work closely with cross-functional teams to ensure that the MES solutions align with business objectives and operational requirements. This role will be ideal for a technical Apriso expert with a focus on future-state technologies.

- Develop comprehensive system designs and architectures for Apriso-based MES solutions, ensuring scalability, reliability, and performance
- Define and maintain the architectural vision for MES solutions, ensuring alignment with digital manufacturing strategy and business goals
- Design scalable, modular, and secure MES architectures that integrate with ERPs, PLM, and shop-floor systems
- Establish and enforce MES-specific architectural standards, governance frameworks, and best practices
- Collaborate with Enterprise and Solution Architects to align MES architecture with enterprise integration strategies, respecting role boundaries
- Evaluate and recommend MES platforms, tools, and vendors that support performance, scalability, and innovation
- Provide input on MES innovation opportunities (e.g., IIoT, edge computing, AI/ML) in coordination with Enterprise Architecture/Solution Architecture teams
- Guide cross-functional teams through architecture reviews, solution validation, and technical decision-making
- Ensure MES solutions meet cybersecurity, data governance, and compliance requirements
- Mentor technical teams on architectural principles and integration best practices
- Represent MES architecture in enterprise forums and vendor engagements, supporting alignment and visibility

Qualifications:

Required:
- Bachelor's degree from an accredited institution
- Minimum 8 years of professional experience in IT or manufacturing systems, with a focus on MES architecture, integration, and digital transformation
- Minimum of 3 years in Apriso MES architecture, including system configuration, customization, and integration with other enterprise systems
- Eaton will not consider applicants for employment immigration sponsorship or support for this position. This means that Eaton will not support any CPT, OPT, or STEM OPT plans, F-1 to H-1B, H-1B cap registration, O-1, E-3, TN status, I-485 job portability, etc.

Preferred:
- Minimum 5 years of experience in MES and manufacturing systems, with a focus on architecture, integration, and enterprise-scale deployments
- MESA (Manufacturing Enterprise Solutions Association) certification or equivalent is highly desirable
- Experience with other MES technologies and manufacturing systems
Skills:
- Proven experience leading MES strategy and solution design across global manufacturing environments
- Track record of aligning technical architecture with business goals and driving innovation in manufacturing systems for large-scale, multi-site manufacturing environments
- Demonstrated ability to collaborate with Enterprise and Solution Architects to align MES solutions with enterprise integration strategies
- Technical expertise in Apriso, with additional experience in Ignition, GE Proficy, and Siemens Opcenter
- Deep integration knowledge across MES, ERP, PLM, and plant control systems (e.g., SCADA, OPC, PLC, Historian)
- Solid grasp of architecture principles: modularity, scalability, security, and cloud-native design across system, solution, and enterprise layers
- Skilled in data integration, modeling, and industrial standards such as ISA-95 and OPC UA
- Hands-on experience with DevSecOps practices, including version control and environment management (dev/test/prod)
- Proficient in performance tuning and troubleshooting MES applications in production environments
- Familiar with cybersecurity frameworks and compliance in manufacturing contexts
- Demonstrates a creative approach to solving complex technical problems and implementing innovative solutions
- Exhibits a high level of attention to detail, ensuring accuracy and thoroughness in technical work
- Possesses strong leadership qualities, with the ability to inspire and motivate team members
- Commitment to continuous learning and professional development to stay current with industry trends and technologies
- Strong written and verbal communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders across different global cultures
- Comfortable engaging with senior leadership, cross-functional teams, and global partners to drive alignment and informed decision-making
- Excellent judgment, time management, and prioritization skills in dynamic, high-stakes environments
- Skilled in building consensus, resolving conflicts, and influencing outcomes without direct authority
- Demonstrated ability to mentor and guide technical teams toward architectural excellence, while fostering collaboration with Enterprise and Solution Architects

Posted 1 day ago

Apply

6.0 - 10.0 years

4 - 5 Lacs

Mumbai

On-site

About Us

Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions; and its conversational AI offers a B2C-type user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.

We Are An Equal Opportunity Employer:
Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description

Zycus is seeking a DevOps Manager who combines strong technical expertise with leadership abilities to scale our DevOps practices and infrastructure. You will lead a team of engineers focused on automation, system scalability, security, and CI/CD delivery, while actively exploring AI-based innovations (AIOps, LLMs) to drive predictive monitoring, auto-remediation, and intelligent alerting.

Key Responsibilities:

DevOps & Cloud Infrastructure
- Design, implement, and manage secure, scalable infrastructure across AWS/Azure/GCP.
- Drive cost optimization, performance tuning, and disaster recovery strategies.
- Lead adoption of best practices across high-availability and fault-tolerant systems.

Containerization & Orchestration
- Manage containerized environments using Docker, Helm, and Kubernetes (EKS/Rancher/OCP).
- Ensure secure, reliable orchestration and performance monitoring at scale.

Infrastructure as Code (IaC)
- Oversee the implementation and maintenance of IaC using Terraform, Ansible, or equivalent tools.
- Ensure all configurations are version-controlled and environment-consistent.

CI/CD Automation
- Architect and continuously improve CI/CD pipelines using Jenkins, ArgoCD, Tekton, etc.
- Enable fast, secure, and high-quality code delivery in coordination with development and QA.

Scripting & Automation
- Guide scripting efforts in Python, Shell, Ansible, or similar to automate deployment, scaling, and incident response.
- Identify opportunities to eliminate manual interventions.

Observability & Monitoring
- Define and implement robust logging, monitoring, and alerting systems using Prometheus, Grafana, ELK, CloudWatch, etc.
- Drive an AI-driven approach to predictive analytics and anomaly detection.

AI-Driven DevOps (AIOps)
- Explore and integrate AI/ML solutions such as LLMs and GPTs for intelligent insights, ChatOps, and self-healing infrastructure.
- Drive POCs and deployment of tools like Moogsoft, Datadog AI, Dynatrace, etc.

Team Leadership & Collaboration
- Lead a team of DevOps engineers; set goals, mentor, review performance, and drive continuous improvement.
- Collaborate cross-functionally with product, engineering, QA, and security teams to align on DevOps objectives.

Job Requirements
- 6–10 years of hands-on DevOps/SRE experience, with at least 2 years in a leadership or managerial role.
- Strong cloud experience with AWS and/or Azure, including services like EC2, S3, RDS, VPC, IAM, and Lambda.
- Expertise in Docker, Kubernetes, Helm, and related tools in production environments.
- Proficiency in Terraform, Ansible, and CI/CD tools (Jenkins, ArgoCD, Tekton).
- Excellent scripting ability in Python, Shell, or similar.
- Strong troubleshooting, analytical thinking, and incident management.
- Experience managing monitoring/logging stacks (Prometheus, Grafana, ELK, CloudWatch).
- Exposure to LLMs, AIOps, or AI-based DevOps practices is highly desirable.
- Proven experience in leading projects, mentoring engineers, and stakeholder communication.

Preferred Qualifications
- Relevant certifications: AWS Certified Solutions Architect, Certified Kubernetes Administrator (CKA), Terraform Associate.
- Knowledge of HAProxy, Nginx, and CDNs.
- Understanding of DevSecOps and infrastructure security.
- Familiarity with integrating LLMs, GPTs, and AI for infrastructure or developer-tooling enhancements.

Five Reasons Why You Should Join Zycus:
- Cloud Product Company: We are a Cloud SaaS company, and our products are created using the latest technologies like ML and AI. Our UI is in AngularJS and we are developing our mobile apps using React.
- A Market Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites.
- Move Between Roles: We believe that change leads to growth, and therefore we allow our employees to shift careers and move to different roles and functions within the organization.
- Get Global Exposure: You get to work with and deal with our global customers.
- Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
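The AIOps responsibilities above start from statistical baselining of metrics. As a deliberately minimal sketch (the latency series is invented, and real platforms use rolling windows, seasonality models, and learned baselines rather than one global mean), a z-score test already captures the core idea of anomaly detection:

```python
import statistics

def anomalies(series, threshold=2.5):
    """Flag points more than `threshold` population standard
    deviations from the mean of the series. This is the simplest
    form of the baselining that AIOps alerting builds on."""
    mean = statistics.mean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# Ten latency samples; index 8 is a spike against a stable baseline.
latency_ms = [101, 99, 102, 100, 98, 101, 100, 99, 400, 100]
print(anomalies(latency_ms))  # → [8]
```

Feeding such flags into an alerting or auto-remediation loop, instead of static thresholds, is what distinguishes predictive monitoring from plain threshold alarms.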

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai

On-site

JOB DESCRIPTION

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Data Platform Engineering Lead at JPMorgan Chase within Asset and Wealth Management, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities
- Lead the design, development, and implementation of scalable data pipelines and ETL batches using Python/PySpark on AWS.
- Execute standard software solutions, design, development, and technical troubleshooting.
- Use infrastructure as code to build applications that orchestrate and monitor data pipelines, create and manage on-demand compute resources in the cloud programmatically, and create frameworks to ingest and distribute data at scale.
- Manage and mentor a team of data engineers, providing guidance and support to ensure successful product delivery and support.
- Collaborate proactively with stakeholders, users, and technology teams to understand business/technical requirements and translate them into technical solutions.
- Optimize and maintain data infrastructure on the cloud platform, ensuring scalability, reliability, and performance.
- Implement data governance and best practices to ensure data quality and compliance with organizational standards.
- Monitor and troubleshoot applications and data pipelines, identifying and resolving issues in a timely manner.
- Stay up to date with emerging technologies and industry trends to drive innovation and continuous improvement.
- Add to team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years applied experience.
- Experience in software development and data engineering, with demonstrable hands-on experience in Python and PySpark.
- Proven experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Good understanding of data modeling, data architecture, ETL processes, and data warehousing concepts.
- Experience with, or good knowledge of, cloud-native ETL platforms like Snowflake and/or Databricks.
- Experience with big data technologies and services like AWS EMR, Redshift, Lambda, and S3.
- Proven experience with efficient cloud DevOps practices and CI/CD tools like Jenkins/GitLab for data engineering platforms.
- Good knowledge of SQL and NoSQL databases, including performance tuning and optimization.
- Experience with declarative infrastructure provisioning tools like Terraform, Ansible, or CloudFormation.
- Strong analytical skills to troubleshoot issues and optimize data processes, working independently and collaboratively.
- Experience in leading and managing a team/pod of engineers, with a proven track record of successful project delivery.

Preferred qualifications, capabilities, and skills
- Knowledge of the machine learning model lifecycle, language models, and cloud-native MLOps pipelines and frameworks is a plus.
- Familiarity with data visualization tools and data integration patterns.
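The monitor-and-resolve responsibility above usually takes the form of retries with backoff around each pipeline step. As a minimal sketch (`run_with_retries` and `flaky_load` are invented for illustration; in production an orchestrator such as Airflow or AWS Step Functions owns retry policy), the pattern looks like this:

```python
import time

def run_with_retries(step, max_attempts=3, base_delay=0.01):
    """Run one pipeline step, retrying transient failures with
    exponential backoff before surfacing the error."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: escalate to the operator
            time.sleep(base_delay * 2 ** (attempt - 1))

# A step that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_load():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(run_with_retries(flaky_load))  # → ok
```

The design choice worth noting is the exponential delay: hammering a struggling downstream system with immediate retries tends to prolong the outage the retry was meant to absorb.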

Posted 1 day ago

Apply

3.0 years

0 Lacs

Mumbai

On-site

JOB DESCRIPTION
You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Java/AWS Software Engineer II at JPMorganChase within the Commercial & Investment Bank Payments Technology team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities
Executes standard software solutions, design, development, and technical troubleshooting
Writes secure and high-quality code using the syntax of at least one programming language with limited guidance
Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications
Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation
Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity
Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development
Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems
Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
Formal training or certification on software engineering concepts and 3+ years applied experience
Hands-on practical experience in system design, application development, testing, and operational stability
Proficiency in one of the Java/Python programming languages
Experience in development with a sound foundation in coding-standard best practices
Extensive experience with AWS services such as EC2, Lambda, IAM, RDS, DDB, Kinesis, and more
Proficiency in developing, deploying, and managing Docker containers
Well versed in application design patterns
Fair understanding of how to define and execute nonfunctional requirements (NFRs): performance tuning, resiliency setup, monitoring, transaction tracing, etc.
Excellent analytical skills and a problem-solving/investigation approach
Experience working with event stores like Kafka, AWS SQS, and KDS streams
Experience working with AWS/PCF cloud and Agile methodology
Preferred qualifications, capabilities, and skills
Experience with Terraform for infrastructure as code
Kubernetes (EKS) and ECS: knowledge of managing container orchestration with EKS and ECS
Familiarity with basic Linux commands and shell scripting
Experience in setting up and managing CI/CD pipelines using Jenkins and Spinnaker
ABOUT US
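The event stores named above (Kafka, SQS, KDS) typically offer at-least-once delivery, so consumers must tolerate redelivered messages. A small, illustrative sketch of an idempotent consumer in Python (the class and method names are hypothetical, not part of any AWS or Kafka SDK):

```python
import hashlib

class IdempotentConsumer:
    """Handle at-least-once delivery by skipping duplicate messages.

    Keeps a set of fingerprints of already-processed messages; a production
    version would persist these (e.g. in a database) rather than in memory.
    """

    def __init__(self, handler):
        self.handler = handler
        self._seen = set()

    def consume(self, message: str) -> bool:
        key = hashlib.sha256(message.encode()).hexdigest()
        if key in self._seen:
            return False          # duplicate delivery: skip
        self.handler(message)
        self._seen.add(key)       # mark done only after the handler succeeds
        return True

processed = []
consumer = IdempotentConsumer(processed.append)
consumer.consume("payment-123")
consumer.consume("payment-123")   # redelivered: ignored
print(processed)                  # → ['payment-123']
```

Marking a message as seen only after the handler succeeds means a crash mid-handling causes a retry rather than a lost message, which is the usual trade-off with at-least-once queues.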

Posted 1 day ago

Apply

3.0 years

3 - 6 Lacs

Chennai

On-site

ROLE SUMMARY
At Pfizer we make medicines and vaccines that change patients' lives, with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission. Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on:
Manufacturing throughput efficiency and increased manufacturing yield
Reduction of end-to-end cycle time and increase of percent release attainment
Increased quality control lab throughput and more timely closure of quality assurance investigations
Increased manufacturing yield of vaccines
More cost-effective network planning decisions and lowered inventory costs
In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you’ll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better!
ROLE RESPONSIBILITIES
The Senior Associate, Integration Engineer’s responsibilities include, but are not limited to:
Maintain database service catalogues
Build, maintain and optimize data pipelines
Support cross-functional teams with data-related tasks
Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner
Automate builds and deployments of database environments
Support development teams in database-related troubleshooting and optimization
Document technical specifications, data flows, system architectures and installation instructions for the provided services
Collaborate with stakeholders to understand data requirements and translate them into technical solutions
Participate in relevant SAFe ceremonies and meetings
BASIC QUALIFICATIONS
Education: Bachelor’s degree or Master’s degree in Computer Science, Data Engineering, Data Science, or a related discipline
Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields
Broad understanding of data engineering techniques and technologies, including at least 3 of the following:
PostgreSQL (or similar SQL database(s))
Neo4j/Cypher
ETL (Extract, Transform, and Load) processes
Airflow or other data pipeline technology
Kafka (distributed event streaming platform)
Proficient or better in a scripting language, ideally Python
Experience tuning and optimizing database performance
Knowledge of modern data integration patterns
Strong verbal and written communication skills and ability to work in a collaborative team environment spanning global time zones
Proactive approach and goal-oriented mindset
Self-driven approach to research and problem solving with proven analytical skills
Ability to manage tasks across multiple projects at the same time
PREFERRED QUALIFICATIONS
Pharmaceutical experience
Experience working with Agile delivery methodologies (e.g., Scrum)
Experience with graph databases
Experience with Snowflake
Familiarity with cloud platforms such as AWS
Experience with containerization technologies such as Docker and orchestration tools like Kubernetes
PHYSICAL/MENTAL REQUIREMENTS
None
NON-STANDARD WORK SCHEDULE, TRAVEL OR ENVIRONMENT REQUIREMENTS
The job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases as well as unplanned/on-call level 3 support. Travel requirements are project based; the estimated percentage of travel to support project and departmental activities is less than 10%.
Work Location Assignment: Hybrid
Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.
Information & Business Tech #LI-PFE
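Several of the responsibilities above (building and maintaining pipelines, resolving data issues in a timely manner) come down to making individual pipeline tasks resilient to transient failures. A minimal sketch, assuming a generic zero-argument task callable; nothing here is Airflow- or Kafka-specific, though orchestrators expose the same retry-with-backoff idea as configuration:

```python
import time

def run_with_retry(task, *, attempts=3, base_delay=0.0):
    """Run one pipeline task, retrying transient failures with backoff.

    `task` is any zero-argument callable; the delay doubles on each retry.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return task()
        except Exception as exc:          # production code would catch narrower types
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"task failed after {attempts} attempts") from last_error

calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient source outage")
    return ["row-1", "row-2"]

print(run_with_retry(flaky_extract))  # → ['row-1', 'row-2'] on the third attempt
```

The key design choice is distinguishing transient errors (retry) from permanent ones (fail fast); conflating the two either masks real bugs or makes pipelines flaky.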

Posted 1 day ago

Apply

3.0 - 10.0 years

0 Lacs

Chennai

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
CMSTDR Senior (TechOps)
KEY Capabilities:
Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA
Minimum of Splunk Power User Certification
Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc.
Perform remote and on-site gap assessments of the SIEM solution
Define evaluation criteria & approach based on the client requirement & scope, factoring in industry best practices & regulations
Conduct interviews with stakeholders, review documents (SOPs, architecture diagrams etc.)
Evaluate the SIEM based on the defined criteria and prepare audit reports
Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment
Understand customer requirements and recommend best practices for SIEM solutions
Offer consultative advice in security principles and best practices related to SIEM operations
Design and document a SIEM solution to meet the customer needs
Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) ones, by creating custom parsers
Verification of data of log sources in the SIEM, following the Common Information Model (CIM)
Experience in parsing and masking of data prior to ingestion in the SIEM
Provide support for the data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution
Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources
Assist the client with technical guidance to configure end log sources (in-scope) to be integrated into the SIEM
Experience in handling big data integration via Splunk
Expertise in SIEM content development, which includes developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems
Hands-on experience in development and customization of Splunk Apps & Add-ons
Builds advanced visualizations (interactive drilldowns, glass tables etc.)
Build and integrate contextual data into notable events
Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks
Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near real-time visibility into the performance of client applications
Experience in installation, configuration and usage of premium Splunk Apps and Add-ons such as the ES App, UEBA, ITSI etc.
Sound knowledge of configuration of alerts and reports
Good exposure to automatic lookups, data models and creating complex SPL queries
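The "parsing and masking of data prior to ingestion" capability above can be sketched as a forwarder-side masking step. The regular expressions below are simplified assumptions for illustration; real Splunk deployments more often apply ingest-time SEDCMD or transforms rules in configuration rather than custom code:

```python
import re

# Illustrative patterns: a 16-digit card number (spaces/hyphens allowed)
# and an e-mail address. Production patterns would be broader and tested
# against the real log formats being onboarded.
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_event(raw: str) -> str:
    """Mask card numbers and e-mail addresses in a log event before ingestion."""
    masked = CARD_RE.sub("****CARD****", raw)
    return EMAIL_RE.sub("****EMAIL****", masked)

event = "user=alice@example.com paid with 4111 1111 1111 1111"
print(mask_event(event))  # → user=****EMAIL**** paid with ****CARD****
```

Masking before ingestion (rather than at search time) means the sensitive values never land in the index, which is usually what data protection reviews require.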
Create, modify and tune the SIEM rules to adjust the specifications of alerts and incidents to meet client requirements
Work with the client SPOC on correlation rule tuning (as per the use case management life cycle), incident classification and prioritization recommendations
Experience in creating custom commands, custom alert actions, adaptive response actions etc.
Qualification & experience:
Minimum of 3 to 10 years’ experience with a depth of network architecture knowledge that will translate over to deploying and integrating a complicated security intelligence solution into global enterprise environments
Strong oral, written and listening skills are an essential component of effective consulting
Strong background in network administration; the ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary
Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting
Good to have: experience with design and implementation of Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management
Multiple cluster deployment & management experience as per vendor guidelines and industry best practices
Troubleshoot Splunk platform and application issues, escalate issues and work with Splunk support to resolve them
Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage
Certifications in a core security-related discipline will be an added advantage
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Chennai

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
EY - GDS - Data and Analytics – Informatica Manager
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.
Summary of Technologies:
ETL Tools: Informatica PowerCenter, Informatica IDMC
Databases: Oracle, DB2, IDMS
Programming Languages: PL/SQL, SQL
Data Modeling and Documentation Tools: Standard data modeling tools, data flow tools
Data Warehousing Technologies: Data warehouse architecture and design
Performance Tuning Tools: Oracle Database tuning tools, SQL tuning tools
Roles and Responsibilities:
Project Management: Oversee the planning, execution, and delivery of data integration projects using Informatica PowerCenter and IDMC. Coordinate with stakeholders to define project scope, objectives, and deliverables.
Team Leadership: Manage and direct the tasks of a team with up to two resources, providing mentorship and guidance on best practices and technical skills. Foster a collaborative environment to encourage team development and knowledge sharing.
Architecture and Design: Design and implement data integration architectures that meet business requirements, specifically in the context of Core Banking and COTS (Commercial Off-The-Shelf) products. Analyze data structures in existing legacy systems to design extract, transform, and load (ETL) processes and data warehouse data structures.
Development and Implementation: Create, maintain, and optimize stored procedures, functions, inline SQL, database structures, and ETL processes to adapt to changing needs and requirements. Perform advanced ETL development activities using Informatica, PL/SQL, and Oracle Database tuning, including SQL tuning and Informatica Server administration. Develop data movement processes for the data warehouse.
Performance Tuning: Monitor and optimize the performance of ETL processes and data integration workflows. Identify and resolve performance bottlenecks in data processing, ensuring efficient data movement.
Data Quality Management: Implement data quality checks and validations to ensure the accuracy and integrity of data. Make recommendations, based on data profiling and analysis, for changes to existing/legacy systems, particularly in the area of automated data validations.
Collaboration with IT and Business Teams: Work closely with IT and business team members in the design and implementation of the data warehouse. Collaborate with non-technical subject matter experts to understand underlying data behavior and characteristics, presenting findings in an accessible manner.
Documentation and Reporting: Use standard data modeling, data flow, and data documentation tools to analyze, document, and present data analysis work. Maintain comprehensive documentation of data integration processes, workflows, and architecture. Provide regular status reports to management and stakeholders on project progress and challenges.
Training and Support: Provide training and support to end-users and team members on Informatica tools and best practices. Act as a point of contact for troubleshooting and resolving issues related to data integration.
Continuous Improvement: Stay updated with the latest trends and advancements in data integration technologies. Propose and implement improvements to existing processes and tools to enhance efficiency and effectiveness.
To qualify for the role, you must have
Education: BE/BTech/MCA with 10+ years of IT development experience, specifically in data integration and management
Informatica Expertise: Proven expertise in designing, architecting, developing, and implementing data engineering solutions using Informatica PowerCenter and IDMC
Cloud Platform Experience: Hands-on experience with public cloud data platforms such as Snowflake, Azure, and AWS for data integration and management
Regulatory Knowledge: Strong understanding of Data Governance frameworks and compliance regulations, including BCBS, GDPR, HIPAA, and ACORD
Data Documentation: Experience in producing Data Lineage Reports, Data Dictionaries, Business Glossaries, and identifying Key Business Entities (KBEs) and Critical Data Elements (CDEs) through data analysis
Data Integration Experience: 5+ years of experience in data lineage, data governance, data modeling, and data integration solutions, particularly with Informatica tools
Real-Time Data Processing: Strong exposure to real-time data processing methodologies and technologies within the Informatica ecosystem
Customer Interaction: Ability to effectively manage customer interactions and address issues in various situations
Data Management Knowledge: In-depth knowledge of Data Architecture, Data Modeling, and the adoption of best practices and policies in the Data Management space
Database and Warehousing Experience: Experience with databases (e.g., Oracle, DB2, IDMS), data warehousing, and high-performance computing environments
Presales and Presentations: Experience in presales activities, responding to RFPs, and delivering impactful customer presentations related to data integration solutions
Domain Experience: Nice to have experience in the Insurance and Finance domains, particularly in relation to data integration and management
Ideally, you’ll also have
Client management skills
What we look for
People with technical experience and enthusiasm to learn new things in this fast-moving environment
What working at EY offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.
Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Posted 1 day ago

Apply

7.0 years

6 - 10 Lacs

Bengaluru

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY - Strategy and Transactions - SaT – DnA Associate Manager
EY’s Data n’ Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Management, Visualization, Business Analytics and Automation. The assignments cover a wide range of countries and industry sectors.
The opportunity
We’re looking for an Associate Manager - Data Engineering. The main objective of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for the clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects, and is also on-shore facing. This role will be instrumental in designing, developing, and evolving modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will come up with design specifications, documentation, and development of data migration mappings and transformations for a modern Data Warehouse setup/data mart creation, and define robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting.
Your key responsibilities
Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management and warehouse administration for both on-prem and cloud-based engagements
Strong working knowledge across the technology stack including ETL, ELT, data analysis, metadata, data quality, audit and design
Design, develop, and test in an ETL tool environment (GUI/canvas-driven tools to create workflows)
Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
Provide technical leadership to a team of data warehouse and business intelligence developers
Coordinate with other technology users to design and implement matters of data governance, data harvesting, cloud implementation strategy, privacy, and security
Adhere to ETL/Data Warehouse development best practices
Responsible for data orchestration, ingestion, ETL and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP)
Assist the team with performance tuning for ETL and database processes
Skills and attributes for success
Minimum of 7 years of total experience with 3+ years in the Data Warehousing/Business Intelligence field
Solid hands-on 3+ years of professional experience with creation and implementation of data warehouses on client engagements and helping create enhancements to a data warehouse
Strong knowledge of data architecture for staging and reporting schemas, data models and cutover strategies using industry-standard tools and technologies
Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP)
Minimum 3+ years’ experience in Azure database offerings [Relational, NoSQL, Data Warehouse]
2+ years’ hands-on experience in various Azure services preferred – Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services & Databricks
Minimum of 3 years of hands-on database design, modeling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse
Strong in PySpark, SparkSQL
Knowledge and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS Cubes etc.)
Strong creative instincts related to data analysis and visualization, and a keen curiosity to learn the business methodology, data model and user personas
Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends
Experience with the software development lifecycle (SDLC) and principles of product development such as installation, upgrade and namespace management
Willingness to mentor team members
Solid analytical, technical and problem-solving skills
Excellent written and verbal communication skills
To qualify for the role, you must have
Bachelor’s or equivalent degree in computer science, or related field, required
Advanced degree or equivalent business experience preferred
Fact-driven and analytically minded with excellent attention to detail
Hands-on experience with data engineering tasks such as building analytical data records, and experience manipulating and analyzing large volumes of data
Relevant work experience of minimum 6 to 8 years in a Big 4 or technology/consulting setup
Ideally, you’ll also have
Ability to think strategically/end-to-end with a result-oriented mindset
Ability to build rapport within the firm and win the trust of the clients
Willingness to travel extensively and to work on client sites / practice office locations
Experience in Snowflake
What we look for
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment
An opportunity to be part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide
Opportunities to work with EY SaT practices globally with leading businesses across a range of industries
What working at EY offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.
Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 day ago

Apply

6.0 years

6 - 10 Lacs

Bengaluru

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY - Strategy and Transactions - SaT – DnA Senior Analyst
EY’s Data n’ Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Management, Visualization, Business Analytics and Automation. The assignments cover a wide range of countries and industry sectors.
The opportunity
We’re looking for a Senior Analyst - Data Engineering. The main objective of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for the clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects, and is also on-shore facing. This role will be instrumental in designing, developing, and evolving modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will come up with design specifications, documentation, and development of data migration mappings and transformations for a modern Data Warehouse setup/data mart creation, and define robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting.
Your key responsibilities
Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management and warehouse administration for both on-prem and cloud-based engagements
Strong working knowledge across the technology stack including ETL, ELT, data analysis, metadata, data quality, audit and design
Design, develop, and test in an ETL tool environment (GUI/canvas-driven tools to create workflows)
Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
Provide technical leadership to a team of data warehouse and business intelligence developers
Coordinate with other technology users to design and implement matters of data governance, data harvesting, cloud implementation strategy, privacy, and security
Adhere to ETL/Data Warehouse development best practices
Responsible for data orchestration, ingestion, ETL and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP)
Assist the team with performance tuning for ETL and database processes
Skills and attributes for success
Minimum of 6 years of total experience with 3+ years in the Data Warehousing/Business Intelligence field
Solid hands-on 3+ years of professional experience with creation and implementation of data warehouses on client engagements and helping create enhancements to a data warehouse
Strong knowledge of data architecture for staging and reporting schemas, data models and cutover strategies using industry-standard tools and technologies
Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP)
Minimum 3+ years’ experience in Azure database offerings [Relational, NoSQL, Data Warehouse]
2+ years’ hands-on experience in various Azure services preferred – Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services & Databricks
Minimum of 3 years of hands-on database design, modeling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse
Strong in PySpark, SparkSQL
Knowledge and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS Cubes etc.)
Strong creative instincts related to data analysis and visualization, and a keen curiosity to learn the business methodology, data model and user personas
Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends
Experience with the software development lifecycle (SDLC) and principles of product development such as installation, upgrade and namespace management
Willingness to mentor team members
Solid analytical, technical and problem-solving skills
Excellent written and verbal communication skills
To qualify for the role, you must have
Bachelor’s or equivalent degree in computer science, or related field, required
Advanced degree or equivalent business experience preferred Fact-driven and analytically minded with excellent attention to detail Hands-on experience with data engineering tasks such as building analytical data records and experience manipulating and analyzing large volumes of data Relevant work experience of minimum 6 to 8 years in a Big 4 or technology/consulting setup Ideally, you'll also have Ability to think strategically/end-to-end with a result-oriented mindset Ability to build rapport within the firm and win the trust of clients Willingness to travel extensively and to work on client sites/practice office locations Experience in Snowflake What we look for A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide Opportunities to work with EY SaT practices globally with leading businesses across a range of industries What working at EY offers At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.
Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
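The ETL/ELT and staging-to-reporting responsibilities described above can be illustrated with a minimal, tool-agnostic sketch in plain Python. The row shapes, field names, and quality rules below are invented for illustration only; a real engagement would implement the same extract-transform-load flow in an ETL tool or PySpark.

```python
# Minimal staging-to-reporting ETL sketch (assumed dict-based row model).

def extract(raw_rows):
    """Extract: drop rows that fail basic quality checks (audit step)."""
    return [r for r in raw_rows if r.get("order_id") and r.get("amount") is not None]

def transform(staging_rows):
    """Transform: conform rows to the reporting schema (typed amount, upper-cased region)."""
    return [
        {"order_id": r["order_id"],
         "region": r.get("region", "UNKNOWN").upper(),
         "amount": round(float(r["amount"]), 2)}
        for r in staging_rows
    ]

def load(reporting_rows):
    """Load: aggregate into a reporting fact keyed by region."""
    totals = {}
    for r in reporting_rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

raw = [
    {"order_id": 1, "region": "south", "amount": "10.5"},
    {"order_id": 2, "region": "south", "amount": "4.5"},
    {"order_id": None, "region": "east", "amount": "99"},  # fails quality check
    {"order_id": 3, "amount": "7.0"},                      # missing region
]
report = load(transform(extract(raw)))
print(report)  # {'SOUTH': 15.0, 'UNKNOWN': 7.0}
```

The same three-stage structure maps directly onto the staging and reporting schemas mentioned in the posting, with the quality filter standing in for the audit/data-quality layer.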

Posted 1 day ago

Apply

8.0 years

5 - 10 Lacs

Bengaluru

On-site

We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. What you'll do: You will take ownership of designing and building core integration frameworks that enable real-time, event-driven data flows between distributed SAP systems. As a senior contributor, you will work closely with architects to drive end-to-end development of services and pipelines supporting distributed data processing, data transformations and intelligent automation. This is a unique opportunity to contribute to SAP's evolving data platform initiatives with hands-on involvement in Java, Python, Kafka, DevOps, real-time analytics, intelligent monitoring, BTP and Hyperscaler ecosystems. Responsibilities: Design and develop microservices using Java, RESTful APIs and messaging frameworks such as Apache Kafka. Designing and developing UIs based on SAP UI5/Fiori is a plus. Design and develop an observability framework for customer insights. Build and maintain scalable data processing and ETL pipelines that support real-time and batch data flows. Experience with Databricks is an advantage. Accelerate the App2App integration roadmap by identifying reusable patterns, driving platform automation and establishing best practices. Collaborate with cross-functional teams to enable secure, reliable and performant communication across SAP applications. Build and maintain distributed data processing pipelines, supporting large-scale data ingestion, transformation and routing.
Work closely with DevOps to define and improve CI/CD pipelines, monitoring and deployment strategies using modern GitOps practices. Guide cloud-native, secure deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP). Collaborate with SAP's broader data platform efforts including Datasphere, SAP Analytics Cloud and BDC runtime architecture. Ensure adherence to best practices in microservices architecture, including service discovery, load balancing, and fault tolerance. Stay updated with the latest industry trends and technologies to continuously improve the development process. What you bring: 8+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns. Hands-on experience building ETL pipelines and working with large-scale data processing frameworks. Exposure to log aggregation tools such as Splunk, ELK, etc. Experience or experimentation with tools such as Databricks, Apache Spark or other cloud-native data platforms is highly advantageous. Familiarity with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud or HANA is highly desirable. Experience designing CI/CD pipelines and working with containerization (Docker), Kubernetes and DevOps best practices. Working knowledge of Hyperscaler environments such as AWS, Azure or GCP. Passionate about clean code, automated testing, performance tuning and continuous improvement. Strong communication skills and ability to collaborate with global teams across time zones. Meet your Team SAP is the market leader in enterprise application software, helping companies of all sizes and industries run at their best. As part of the Business Data Cloud (BDC) organization, the Foundation Services team is pivotal to SAP's Data & AI strategy, delivering next-generation data experiences that power intelligence across the enterprise.
Located in Bangalore, India, our team drives cutting-edge engineering efforts in a collaborative, inclusive and high-impact environment, enabling innovation and integration across SAP’s data platform #DevT3 Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. 
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, etc.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 430165 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
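The event-driven App2App pattern this role centers on can be sketched with a tiny in-memory stand-in for a Kafka topic with consumer groups and offsets. This is an illustration of the concept only, with invented event names; in production, Apache Kafka (or a managed equivalent) would replace this class entirely.

```python
# Hedged sketch: an append-only topic with per-consumer-group offsets,
# mimicking how independent services consume the same event stream.
from collections import defaultdict

class Topic:
    def __init__(self):
        self.log = []                    # append-only event log
        self.offsets = defaultdict(int)  # consumer group -> next offset to read

    def publish(self, event):
        self.log.append(event)

    def poll(self, group, max_records=10):
        """Deliver unseen events to a consumer group and advance its offset."""
        start = self.offsets[group]
        batch = self.log[start:start + max_records]
        self.offsets[group] += len(batch)
        return batch

orders = Topic()
orders.publish({"type": "OrderCreated", "id": 1})
orders.publish({"type": "OrderShipped", "id": 1})

billing_batch = orders.poll("billing")      # billing group sees both events
analytics_batch = orders.poll("analytics")  # analytics tracks its own offset
print(len(billing_batch), len(analytics_batch))  # 2 2
```

The key property, each consumer group reading the whole stream at its own pace, is what lets the posting's "real-time and batch data flows" share one event log.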

Posted 1 day ago

Apply

5.0 years

3 - 4 Lacs

Bengaluru

On-site

June 1, 2025 Type Full Time Location Bangalore A pure play Salesforce and MuleSoft partner since 2021, Cloud Odyssey brings together the enthusiasm and innovation of a young company with the extensive experience and expertise of its people. With a global track record of over 100 successful projects, a consistent 5/5 customer satisfaction rating, and deep domain expertise across key industries, we’re quickly establishing ourselves as the go-to partner for Salesforce and MuleSoft in Asia, Europe and North America. To support our continuous growth, we are looking for Salesforce B2C Lead to join an amazing team of like-minded professionals and help our customers to transform the way they work and improve business performance. Key Responsibilities: Develop storefront solutions leveraging the latest SFRA, customize and maintain scalable B2C e-commerce applications using SFCC. Build responsive and dynamic UI/UX components using HTML, CSS, JavaScript, and front-end frameworks. Optimize performance and ensure cross-browser compatibility for storefronts. Develop and integrate with third-party systems like payment gateways, shipping providers, and marketing tools using REST/SOAP APIs. Work with Business Manager configurations, including promotions, catalog setup and site settings. Conduct code reviews and performance tuning to ensure scalability and efficiency. Troubleshoot and resolve technical issues related to performance, functionality, or integrations. Work closely with project managers, UX designers, QA teams, BA and business stakeholders to ensure seamless project delivery. Contribute to technical design documentation and best practices. Qualifications: 5+ years of hands-on experience in Salesforce Commerce Cloud development. Proven experience with SFRA development is a must. Technical Skills: Proficiency in Demandware Script, ISML and JavaScript. Strong knowledge of HTML, CSS, and modern frontend frameworks like React. 
Experience with backend integrations using REST/SOAP APIs. Familiarity with version control systems (e.g., Git). Knowledge of CI/CD pipelines is a plus. Experience with Agile methodologies and tools (e.g., Jira). Understanding of SEO principles and accessibility standards. Certifications: Salesforce B2C Commerce Developer. Job Category: Salesforce Job Type: Full Time Job Location: Bangalore
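Third-party integrations like the payment-gateway and shipping-provider calls mentioned above usually need a retry-with-backoff wrapper around the REST call. Below is a hedged sketch in Python with the transport stubbed out so only the pattern is visible; the gateway behavior and response shape are invented, and real SFCC code would implement this in Demandware Script against the provider's actual API.

```python
# Retry a flaky integration call with exponential backoff (assumed pattern).
import time

def call_with_retry(request_fn, retries=3, base_delay=0.01):
    """Invoke request_fn, retrying on ConnectionError with growing delays."""
    for attempt in range(retries):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Stub that fails twice, then succeeds -- simulating a flaky gateway.
attempts = {"n": 0}
def flaky_gateway():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("gateway timeout")
    return {"status": "PAID"}

result = call_with_retry(flaky_gateway)
print(result, attempts["n"])  # {'status': 'PAID'} 3
```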

Posted 1 day ago

Apply

6.0 - 10.0 years

4 - 9 Lacs

Bengaluru

On-site

About Us Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements. Job Description Zycus is seeking a DevOps Manager who combines strong technical expertise with leadership abilities to scale our DevOps practices and infrastructure. You will lead a team of engineers focused on automation, system scalability, security, and CI/CD delivery — while actively exploring AI-based innovations (AIOps, LLMs) to drive predictive monitoring, auto-remediation, and intelligent alerting Key Responsibilities : DevOps & Cloud Infrastructure Design, implement, and manage secure, scalable infrastructure across AWS/Azure/GCP. Drive cost optimization, performance tuning, and disaster recovery strategies. 
Lead adoption of best practices across high-availability and fault-tolerant systems. Containerization & Orchestration Manage containerized environments using Docker, Helm, and Kubernetes (EKS/Rancher/OCP). Ensure secure, reliable orchestration and performance monitoring at scale. Infrastructure as Code (IaC) Oversee the implementation and maintenance of IaC using Terraform, Ansible, or equivalent tools. Ensure all configurations are version-controlled and environment-consistent. CI/CD Automation Architect and continuously improve CI/CD pipelines using Jenkins, ArgoCD, Tekton, etc. Enable fast, secure, and high-quality code delivery in coordination with development and QA. Scripting & Automation Guide scripting efforts in Python, Shell, Ansible, or similar to automate deployment, scaling, and incident response. Identify opportunities to eliminate manual interventions. Observability & Monitoring Define and implement robust logging, monitoring, and alerting systems using Prometheus, Grafana, ELK, CloudWatch, etc. Drive an AI-driven approach to predictive analytics and anomaly detection. AI-Driven DevOps (AIOps) Explore and integrate AI/ML solutions such as LLMs and GPTs for intelligent insights, ChatOps, and self-healing infrastructure. Drive POCs and deployment of tools like Moogsoft, Datadog AI, Dynatrace, etc. Team Leadership & Collaboration Lead a team of DevOps engineers; set goals, mentor, review performance, and drive continuous improvement. Collaborate cross-functionally with product, engineering, QA, and security teams to align on DevOps objectives. Job Requirement 6–10 years of hands-on DevOps/SRE experience, with at least 2 years in a leadership or managerial role. Strong cloud experience with AWS and/or Azure, including services like EC2, S3, RDS, VPC, IAM, Lambda. Expertise in Docker, Kubernetes, Helm, and related tools in production environments. Proficient in Terraform, Ansible, and CI/CD tools (Jenkins, ArgoCD, Tekton).
Excellent scripting ability in Python, Shell, or similar. Strong troubleshooting, analytical thinking, and incident management. Experience managing monitoring/logging stacks (Prometheus, Grafana, ELK, CloudWatch). Exposure to LLMs, AIOps, or AI-based DevOps practices is highly desirable. Proven experience in leading projects, mentoring engineers, and stakeholder communication. Preferred Qualifications Relevant certifications: AWS Certified Solutions Architect Certified Kubernetes Administrator (CKA) Terraform Associate Knowledge of HAProxy, Nginx, CDNs. Understanding of DevSecOps and infrastructure security. Familiarity with integrating LLMs, GPTs, and AI for infrastructure or developer tooling enhancements. Five Reasons Why You Should Join Zycus: Cloud Product Company: We are a cloud SaaS company and our products are created using the latest technologies like ML and AI. Our UI is in AngularJS and we are developing our mobile apps using React. A Market Leader: Zycus is recognized by Gartner (the world's leading market research analyst) as a Leader in Procurement Software Suites. Move between Roles: We believe that change leads to growth and therefore we allow our employees to shift careers and move to different roles and functions within the organization. Get Global Exposure: You get to work with and deal with our global customers. Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features. Start your #CognitiveProcurement journey with us, as you are #MeantforMore
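The predictive-monitoring and intelligent-alerting goals above can be illustrated with the simplest possible anomaly detector: flag metric samples whose z-score exceeds a threshold. Real AIOps stacks (Datadog, Dynatrace, etc.) use far richer models; the latency series and threshold here are invented for illustration.

```python
# Hedged sketch: z-score based anomaly flagging over a metric series.
from statistics import mean, stdev

def anomalies(samples, threshold=3.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []  # flat series: nothing to flag
    return [(i, v) for i, v in enumerate(samples) if abs(v - mu) / sigma > threshold]

latency_ms = [102, 98, 101, 99, 103, 100, 97, 480]  # one obvious spike
print(anomalies(latency_ms, threshold=2.0))  # [(7, 480)]
```

In an alerting pipeline this check would run on a sliding window per metric, with flagged points feeding the notification or auto-remediation path.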

Posted 1 day ago

Apply

0 years

0 Lacs

Bengaluru

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. Cognitive Service and IBM AI Developer Required Skills on Cognitive Services, IBM and Python Natural Language Processing (NLP) Intent recognition and entity extraction NLP libraries: spaCy, Rasa NLU, NLTK, Transformers (Hugging Face) Language model fine-tuning (optional for advanced bots) Conversational Design Dialogue flow design Context management Multi-turn conversation handling Backend Development Programming languages: Python, Node.js Frameworks: IBM Watson/ Rasa/Botpress/Microsoft Bot Framework (self-hosted) API development and integration (RESTful APIs) Frontend & UI Integration Web chat UIs (React, Angular, Vue) Messaging platform integration (e.g., custom web chat, WhatsApp via Twilio, etc.) Voice interface (optional): integration with speech-to-text and text-to-speech engines Watson Assistant: Designing, training, and deploying conversational agents (chatbots and virtual assistants). Watson Discovery: Implementing intelligent document search and insights extraction. Watson Natural Language Understanding (NLU): Sentiment analysis, emotion detection, and entity extraction. Watson Speech Services: Speech-to-text and text-to-speech integration. Watson Knowledge Studio: Building custom machine learning models for domain-specific NLP. Watson OpenScale: Monitoring AI model performance, fairness, and explainability. Core Python: Data structures, OOP, exception handling. API Integration: Consuming REST APIs (e.g., Azure SDKs). Data Processing: Using pandas, NumPy, and json for data manipulation. AI/ML Libraries: Familiarity with scikit-learn, transformers, or OpenAI SDKs. Required Skills on Other Skills (Front End) Component-Based Architecture: Building reusable functional components. 
Hooks: useState, useEffect, useContext, and custom hooks. State Management: Context API, Redux Toolkit, or Zustand. API Integration: Fetching and displaying data from API services. Soft Skills Excellent communication skills Team player Self-starter and highly motivated Ability to handle high-pressure and fast-paced situations Excellent presentation skills Ability to work with globally distributed teams Roles and Responsibilities: Understand existing application architecture and solution design Design individual components and develop the components Work with other architects, leads, and team members in an agile scrum environment Hands-on development Design and develop applications that can be hosted on Azure cloud Design and develop framework and core functionality Identify gaps and come up with working solutions Understand enterprise application design frameworks and processes Lead or mentor junior and/or mid-level developers Review code and establish best practices Look out for the latest technologies, match them with EY use cases and solve business problems efficiently Ability to look at the big picture Proven experience in designing highly secured and scalable web applications on Azure cloud Keep management up to date with progress Work under an Agile design and development framework Good hands-on development experience required EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
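The intent-recognition skill listed above can be shown in its most reduced form: score an utterance against keyword sets per intent and pick the best match. Production bots would use Watson Assistant, Rasa, or spaCy; every intent name and keyword below is made up for illustration.

```python
# Toy intent classifier: keyword overlap, with a fallback intent.
INTENTS = {
    "check_balance": {"balance", "account", "funds"},
    "reset_password": {"password", "reset", "forgot"},
    "greeting": {"hello", "hi", "hey"},
}

def classify(utterance, intents=INTENTS):
    """Return the intent whose keyword overlap with the utterance is largest."""
    tokens = set(utterance.lower().split())
    scores = {name: len(tokens & kws) for name, kws in intents.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify("I forgot my password"))      # reset_password
print(classify("what is the weather like"))  # fallback
```

Real NLU replaces the keyword sets with trained models and adds entity extraction, but the contract (utterance in, intent plus fallback out) is the same one a dialogue-flow layer consumes.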

Posted 1 day ago

Apply

0 years

6 - 11 Lacs

Bengaluru

On-site

Summary Gainwell Technologies is seeking a highly skilled AI and Gen AI Engineer to design, develop, and deploy advanced AI and Generative AI (Gen AI) solutions across our healthcare technology platforms. This role involves building and optimizing AI and Gen AI technologies, integrating them into existing systems, and ensuring their effectiveness in improving healthcare outcomes and operational efficiency while maintaining compliance with industry standards. Role Description Development based on AI/Gen AI – Design, build, and train machine learning, deep learning, time series models, Gen AI (multimodal LLMs), predictive analytics, natural language processing and image processing solutions for healthcare applications. Experienced in multiple LLM fine-tuning techniques. Experienced in building Gen AI solutions using RAG architecture. Skilled in both LangChain and LangGraph. Experience in agentic AI frameworks and workflows using LangChain, LangGraph, CrewAI or OpenAI Swarm. Experienced in multiple vector databases as well as graph databases. Skilled in agentic AI frameworks and has built at least one solution using agentic AI. End-to-End AI Solution Deployment – Develop, test, and deploy AI solutions in cloud and on-premises environments, ensuring reliability, scalability, and real-world impact. Data Processing – Work with large healthcare datasets, performing data preprocessing, feature engineering, and model training while ensuring compliance with HIPAA and other regulatory standards. System Integration – Skilled in API development and integrations. Implement and optimize AI models within Gainwell's existing technology stack, collaborating with software engineers to ensure seamless integration. Experienced in MLOps and LLMOps. Experienced in evaluating models and continuous performance monitoring of both ML/DL models and LLMs. Experienced in applying security measures in Gen AI solutions, implementing guardrails.
Performance Optimization – Continuously monitor, refine, and optimize AI/Gen AI models for accuracy, efficiency, and speed, leveraging MLOps and LLMOps best practices. AI Research & Innovation – Stay updated with the latest AI/ML/Gen AI advancements, exploring new technologies and methodologies to enhance solution effectiveness. #LI-DNI
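The RAG architecture this role calls for has a retrieval half that can be reduced to: rank document chunks by similarity to the query, then hand the top chunk to the LLM prompt. Real systems use embeddings and a vector database; the bag-of-words cosine and the healthcare corpus below are invented stand-ins for illustration.

```python
# Hedged sketch of RAG retrieval: cosine similarity over bag-of-words vectors.
from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k corpus documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(corpus, key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

corpus = [
    "prior authorization rules for imaging claims",
    "member eligibility verification workflow",
    "provider enrollment and credentialing steps",
]
context = retrieve("how do I verify member eligibility", corpus)[0]
prompt = f"Answer using only this context:\n{context}"  # grounding step of RAG
print(context)
```

Swapping the bag-of-words vectors for dense embeddings and the list for a vector database gives the production shape without changing this control flow.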

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Explore innovative opportunities, discover our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow. Job Description Moving our world forward by delivering what matters! UPS is a company with a proud past and an even brighter future. Our values define us. Our culture differentiates us. Our strategy drives us. At UPS we are customer first, people led and innovation driven. UPS's India-based Technology Development Centers will bring UPS one step closer to creating a global technology workforce that will help accelerate our digital journey and help us engineer technology solutions that drastically improve our competitive advantage in the field of logistics. 'Future You' grows as a visible and valued technology professional with UPS, driving us towards an exciting tomorrow. As a global technology organization we can put serious resources behind your development. If you are solutions orientated, UPS Technology is the place for you. 'Future You' delivers ground-breaking solutions to some of the biggest logistics challenges around the globe. You'll take technology to unimaginable places and really make a difference for UPS and our customers.
Job Summary This position provides input, support, and performs full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.). He/She participates in component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements. This position provides input to applications development project plans and integrations. He/She collaborates with teams and supports emerging technologies to ensure effective communication and achievement of objectives. This position provides knowledge and support for applications development, integration, and maintenance. He/She provides input to department and project teams on decisions supporting projects. Responsibilities Senior app developer with a minimum of 5+ years' experience developing applications Experience creating REST services (APIs) keeping microservice design patterns in mind Familiarity with Spanner/SQL Server Experience creating integration services to consume/process data from other systems Familiarity with GCP Pub/Sub/AMQP is helpful Able to create CI/CD pipelines for the above services (Jenkins/Terraform) Able to create relevant documentation for each of the services Perform design reviews and code reviews Experience providing real-time knowledge transfer to the UPS team Establish UPS best practices in design/coding/testing Provide best practices for performance tuning Familiarity with testing, automation, and BDD testing frameworks is desired as well.
Provide best practices for distributed logging and aggregating logs to have appropriate instrumentation for all services Develop microservices keeping in mind best practices for cloud-native applications (GCP GKE/OpenShift) Qualifications Bachelor's degree or international equivalent Bachelor's degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field - Preferred Contract type: Permanent At UPS, equal opportunity, fair treatment and an inclusive work environment are key values to which we are committed.
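One concrete pattern behind the Pub/Sub integration work above is the idempotent consumer: GCP Pub/Sub delivers at least once, so a service must tolerate redelivered messages. The sketch below is a hedged, stdlib-only illustration; message ids and payloads are invented, and real code would use the Pub/Sub client library with a durable dedup store.

```python
# Idempotent message consumer: process each message id exactly once.
class IdempotentConsumer:
    def __init__(self):
        self.seen = set()   # processed message ids (a table/cache in production)
        self.processed = []

    def handle(self, message):
        """Process a message once, even if the broker redelivers it."""
        if message["id"] in self.seen:
            return False    # duplicate delivery: acknowledge and skip
        self.seen.add(message["id"])
        self.processed.append(message["payload"])
        return True

consumer = IdempotentConsumer()
consumer.handle({"id": "m-1", "payload": "package scanned"})
consumer.handle({"id": "m-1", "payload": "package scanned"})  # redelivery
consumer.handle({"id": "m-2", "payload": "out for delivery"})
print(consumer.processed)  # ['package scanned', 'out for delivery']
```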

Posted 1 day ago

Apply

4.0 years

8 - 9 Lacs

Bengaluru

On-site

Description Oracle’s Cloud Infrastructure team is building Infrastructure-as-a-Service technologies that operate at high scale in a broadly distributed multi-tenant cloud environment. Our customers run their businesses on our cloud, and our mission is to provide them with best-in-class compute, storage, networking, database, security, messaging, and an ever-expanding set of foundational cloud-based services. As a Senior Member of Technical Staff, you will own the software development for major components of Oracle’s Cloud Infrastructure. You should be both a rock-solid coder and a distributed systems generalist, able to dive deep into any part of the stack and low-level systems, as well as design broad distributed system interactions. You should value simplicity and scale, work comfortably in a collaborative, agile environment, and be excited to learn. Oracle Notifications Service is a fully managed, multi-tenant pub/sub service which pushes and fans out messages to third party endpoints at scale. Built on top of Oracle Streaming, the service deals with complex back pressure, noisy neighbor, extensibility and scaling challenges. About You You work backward, starting from the user. You care about creating usable, useful software that solves real problems and brings delight to users. You have solid communication skills. You can clearly explain complex technical concepts. You work well with non-engineers. You can lead a conversation in a room with designers, engineers, and product managers. You are comfortable with ambiguity. You have a strong sense of ownership, and are able to drive development of new projects and features to completion. You are comfortable working at all levels of the stack. 
Minimum Qualifications BS in Computer Science, or equivalent experience 4+ years of experience shipping services software Demonstrated ability to write great code using Java, Python, Go, C#, or similar OO languages Strong knowledge of data structures, algorithms, operating systems, and distributed systems fundamentals Working familiarity with networking protocols (TCP/IP, HTTP) and standard network architectures Strong understanding of databases, NoSQL systems, storage and distributed persistence technologies Strong troubleshooting and performance tuning skills Preferred Qualifications: MS in Computer Science Experience in a start-up environment Experience delivering and operating large-scale, highly available distributed systems Strong grasp of Unix-like operating systems Experience building multi-tenant, virtualized infrastructure is a strong plus The position is based in Bangalore, Karnataka, India As a member of the software engineering division, you will assist in defining and developing software for tasks associated with the developing, debugging or designing of software applications or operating systems. Provide technical leadership to other software developers. Specify, design and implement modest changes to existing software architecture to meet changing needs.
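The fan-out and noisy-neighbor challenges the Oracle Notifications Service description mentions come down to one invariant: a failing endpoint must not block delivery to the others. Below is a hedged toy illustration of that isolation, with endpoints as plain callables and invented names; the real service adds retries, backoff, and per-tenant throttling.

```python
# Fan-out with per-endpoint isolation: collect failures instead of raising.
def fan_out(message, endpoints):
    """Deliver a message to every endpoint; a failure only affects that endpoint."""
    delivered, failed = [], []
    for name, push in endpoints.items():
        try:
            push(message)
            delivered.append(name)
        except Exception:
            failed.append(name)  # would be retried with backoff in production
    return delivered, failed

def broken_webhook(message):
    raise ConnectionError("endpoint down")  # simulated noisy/unhealthy subscriber

inbox = []
endpoints = {
    "email": inbox.append,
    "webhook": broken_webhook,
    "sms": inbox.append,
}
delivered, failed = fan_out("instance restarted", endpoints)
print(delivered, failed)  # ['email', 'sms'] ['webhook']
```

Back-pressure handling then builds on this: failed deliveries go to a per-endpoint retry queue so one slow third-party endpoint cannot consume the shared delivery capacity.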

Posted 1 day ago

Apply

3.0 - 7.0 years

3 - 8 Lacs

Bengaluru

On-site

Job requisition ID :: 86432 Date: Jul 26, 2025 Location: Bengaluru Designation: Consultant Entity: Deloitte Touche Tohmatsu India LLP Your potential, unleashed. India's impact on the global economy has increased at an exponential rate and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders, and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters. The team Enterprise technology has to do much more than keep the wheels turning; it is the engine that drives functional excellence and the enabler of innovation and long-term growth. Learn more about ET&P Your work profile As a Consultant in our Oracle team you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations: - Design, develop, and maintain PL/SQL packages, procedures, functions, triggers, and scripts. Develop and optimize complex SQL queries for performance and scalability. Analyze data requirements and translate them into database solutions. Collaborate with application developers and business analysts to design and implement database solutions. Perform data modeling and database design as needed. Maintain and enhance existing PL/SQL code and troubleshoot issues. Develop and maintain technical documentation related to database applications. Participate in code reviews and ensure adherence to best practices. Monitor and optimize database performance and storage. Support QA and production environments, including issue resolution and root cause analysis. Required Skills and Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 3–7 years of experience in Oracle PL/SQL development.
- Strong understanding of relational database concepts and data modeling.
- Experience with Oracle 11g/12c/19c.
- Proficiency in writing complex PL/SQL packages, procedures, and functions.
- Experience with performance tuning and query optimization.
- Familiarity with tools like TOAD, SQL Developer, or similar.
- Knowledge of version control systems (e.g., Git, SVN).
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.

Location and way of working: Base location is Bangalore. This profile involves occasional travel to client locations but does not involve extensive travel for work. Hybrid is our default way of working; each domain has customised the hybrid approach to its unique needs.

Your role as a Consultant: We expect our people to embrace and live our purpose by challenging themselves to identify the issues that are most important for our clients, our people, and for society.
In addition to living our purpose, Consultants across our organisation must strive to be:

- Inspiring: leading with integrity to build inclusion and motivation
- Committed to creating purpose: creating a sense of vision and purpose
- Agile: achieving high-quality results through collaboration and team unity
- Skilled at building diverse capability: developing diverse capabilities for the future
- Persuasive / influencing: persuading and influencing stakeholders
- Collaborating: partnering to build new solutions
- Delivering value: showing commercial acumen
- Committed to expanding business: leveraging new business opportunities
- Analytical acumen: leveraging data to recommend impactful approaches and solutions through the power of analysis and visualization
- Effective communication: holding well-structured and well-articulated conversations to achieve win-win outcomes
- Engagement management / delivery excellence: effectively managing engagements to ensure timely, proactive execution and course correction for their success
- Managing change: responding to a changing environment with resilience
- Managing quality & risk: delivering high-quality results and mitigating risks with utmost integrity and precision
- Strategic thinking & problem solving: applying a strategic mindset to solve business issues and complex problems
- Tech savvy: leveraging ethical technology practices to deliver high impact for clients and for Deloitte
- Empathetic leadership and inclusivity: creating a safe and thriving environment where everyone is valued for who they are, using empathy to understand others and adapting our behaviours and attitudes to become more inclusive

How you'll grow

Connect for impact: Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead: You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.

Inclusion for all: At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams, and communities in a way that reflects their own unique capabilities. Know more about the everyday steps you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude, and potential each and every one of us brings to the table to make an impact that matters.

Drive your career: At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.

Everyone's welcome... entrust your happiness to us: Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of what's in store for you.

Interview tips: We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research and know some background about the organisation and the business area you're applying to. Check out recruiting tips from Deloitte professionals.
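The query-tuning work this Oracle role centres on can be illustrated with a small, self-contained sketch. It uses Python's built-in sqlite3 as a stand-in for Oracle (the table, index, and column names are invented), showing how adding a covering index changes the optimizer's plan from a full scan to an index search; the same before/after reasoning applies when tuning Oracle queries with EXPLAIN PLAN.

```python
import sqlite3

# Illustrative sketch only: SQLite as a stand-in for Oracle; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 42"

# Without an index, the filter forces a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# A covering index on (customer_id, amount) lets the optimizer search instead of scan.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, amount)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[0][3])  # e.g. "SCAN orders"
print(plan_after[0][3])   # e.g. "SEARCH orders USING COVERING INDEX idx_orders_customer ..."
```

The habit the sketch demonstrates, comparing the execution plan before and after each change, is the core loop of the "performance tuning and query optimization" this posting asks for.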

Posted 1 day ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill and passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description

Responsibilities:

- System reliability: Ensure the reliability and uptime of critical services and infrastructure.
- Google Cloud expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and incident response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications

- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 4+ years of experience in site reliability engineering or a similar role.

Skills

- Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.).
- Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.).
- Experience with automation tools (Terraform, Ansible, Puppet).
- Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.).
- Strong scripting skills (Python, Bash, etc.).
- Knowledge of networking concepts and protocols.
- Experience with monitoring tools (Prometheus, Grafana, etc.).

Preferred Certifications

- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
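As a hedged illustration of the monitoring side of this SRE role, the sketch below computes how much of an SLO error budget remains, the arithmetic behind deciding whether an incident warrants slowing down releases. The 99.9% target and the request counts are invented; real tooling would pull these numbers from a monitoring system such as Prometheus.

```python
# Hypothetical error-budget check; thresholds and counts are invented examples.
def error_budget_remaining(total_requests: int, failed_requests: int,
                           slo_target: float = 0.999) -> float:
    """Return the fraction of the error budget still unspent (negative if overspent)."""
    if total_requests == 0:
        return 1.0  # no traffic, nothing spent
    allowed_failures = total_requests * (1.0 - slo_target)
    if allowed_failures == 0:
        return 1.0 if failed_requests == 0 else float("-inf")
    return 1.0 - failed_requests / allowed_failures

# 1,000,000 requests with 400 failures against a 99.9% SLO:
# the budget allows 1,000 failures, so 60% of it remains.
remaining = error_budget_remaining(1_000_000, 400)
print(f"{remaining:.0%} of the error budget remains")  # prints "60% of the error budget remains"
```

In practice a check like this would gate automation (for instance, pausing a CI/CD rollout when `remaining` drops below a team-chosen threshold) rather than just printing a number.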

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at an organization that ranks among the world's 500 largest companies. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description

Job Title: Senior Data Developer (Azure ADF and Databricks)
Experience Range: 8-12 years
Location: Chennai, Hybrid
Employment Type: Full-Time

About UPS: UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation.

About UPS Supply Chain Symphony™: The UPS Supply Chain Symphony™ platform is a cloud-based solution that seamlessly integrates key supply chain components, including shipping, warehousing, and inventory management, into a unified platform. This solution empowers businesses by offering enhanced visibility, advanced analytics, and customizable dashboards to streamline global supply chain operations and decision-making.
About the Role: We are seeking an experienced Senior Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks, and Cosmos DB. The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns.

Key Responsibilities

Data solution design and development:
- Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF).
- Implement data transformations and processing using Azure Databricks.
- Develop and maintain NoSQL data models and queries in Cosmos DB.
- Optimize data pipelines for performance, scalability, and cost efficiency.

Data integration and architecture:
- Integrate structured and unstructured data from diverse data sources.
- Collaborate with data architects to design end-to-end data flows and system integrations.
- Implement data security, governance, and compliance standards.

Performance tuning and optimization:
- Monitor and tune data pipelines and processing jobs for performance and cost efficiency.
- Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB.

Collaboration and mentoring:
- Collaborate with cross-functional teams including data testers, architects, and business analysts.
- Conduct code reviews and provide constructive feedback to improve code quality.
- Mentor junior developers, fostering best practices in data engineering and cloud development.
Primary Skills

- Data engineering: Azure Data Factory (ADF), Azure Databricks
- Cloud platform: Microsoft Azure (Data Lake Storage, Cosmos DB)
- Data modeling: NoSQL data modeling, data warehousing concepts
- Performance optimization: data pipeline performance tuning and cost optimization
- Programming languages: Python, SQL, PySpark

Secondary Skills

- DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation
- Security and compliance: implementing data security and governance standards
- Agile methodologies: experience in Agile/Scrum environments
- Leadership and mentoring: strong communication and coaching skills for team collaboration

Soft Skills

- Strong problem-solving abilities and attention to detail.
- Excellent communication skills, both verbal and written.
- Effective time management and organizational capabilities.
- Ability to work independently and within a collaborative team environment.
- Strong interpersonal skills to engage with cross-functional teams.

Educational Qualifications

- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- Relevant certifications in Azure and data engineering, such as: Microsoft Certified: Azure Data Engineer Associate; Microsoft Certified: Azure Solutions Architect Expert; Databricks Certified Data Engineer Associate or Professional.

About the Team: As a Senior Data Developer, you will be working with a dynamic, cross-functional team that includes developers, product managers, and quality engineers. You will be a key player in the delivery process, helping shape engineering standards and ensuring the delivery of high-quality data solutions.

Contract Type: Permanent. At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
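The record-cleaning style of transformation this data-engineering role describes can be sketched as follows. This is an illustration only: it uses plain stdlib Python rather than PySpark or ADF so it runs anywhere, and the `Shipment` fields and sample rows are hypothetical; in Databricks the same logic would typically be expressed as DataFrame operations.

```python
from dataclasses import dataclass

@dataclass
class Shipment:
    shipment_id: str
    weight_kg: float
    destination: str

def clean(raw_rows):
    """Drop malformed rows and normalise the destination code (hypothetical schema)."""
    cleaned = []
    for row in raw_rows:
        try:
            weight = float(row["weight_kg"])
        except (KeyError, TypeError, ValueError):
            continue  # a real pipeline would quarantine these rows for inspection
        if weight <= 0:
            continue  # physically impossible weights are treated as bad data
        cleaned.append(
            Shipment(row["shipment_id"], weight, row["destination"].strip().upper())
        )
    return cleaned

rows = [
    {"shipment_id": "S1", "weight_kg": "12.5", "destination": " nl "},
    {"shipment_id": "S2", "weight_kg": "oops", "destination": "DE"},
]
print(clean(rows))  # only S1 survives, with destination normalised to "NL"
```

Separating validation, normalisation, and the typed output record keeps each stage independently testable, which is the property that matters once the same transform runs inside a scheduled ADF/Databricks pipeline.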

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Summary: The role entails advanced software development for Power Systems Applications, with a focus on delivering specific functionalities to meet corporate project and product objectives. Responsibilities include collaborating with teams working with Electric Utilities, Independent System Operators (ISOs), and Transmission and Distribution System Operators to develop functional software specifications, followed by design, coding, testing, integration, application tuning, and delivery.

Job Description

Roles and Responsibilities

- Serve as a senior member of the Software Center of Excellence, exemplifying high-quality design, development, testing, and delivery practices.
- Enhance, evolve, and support the high-availability Electricity Energy Market Management System (MMS).
- Lead the design, development, testing, integration, and tuning of advanced Power Systems Application software to fulfill project and product commitments.
- Develop and evolve software in a dynamic and agile environment using the latest technologies and infrastructure.
- Provide domain knowledge and/or technical leadership to a team of electricity markets application software engineers.
- Provide budget estimates for new project tasks to project leads and managers.
- Collaborate with customers throughout the project lifecycle to ensure software quality and functionality meet standards and requirements.
- Mentor junior team members.
- Interact with product development teams, customers, solution providers, and cross-functional teams as needed.
- Apply SDLC principles and methodologies like Lean/Agile/XP, CI, software and product security, scalability, and testing techniques.
- Maintain power systems application functionality, including code fixes, creating tools for model conversion, documentation, and user interfaces.
- Support marketing efforts for proposals and demonstrations to potential customers.

Basic Qualifications

- Ph.D. or Master's degree in Electrical Power Systems, with a thesis or related work in power systems.
- 8 to 11 years of experience in development or project delivery, preferably in power systems analysis, C++, CIM modeling, energy management systems, data analysis, scripting, or systems integration.

Desired Characteristics

- Continuous-improvement mindset; drives change initiatives and process improvements.
- Highly organized and efficient; adept at prioritizing and executing tasks.
- Experience in the power systems domain.
- Proficiency in testing and test automation.
- Strong knowledge of source control management, particularly GitHub.
- Demonstrated ability to learn new development practices, languages, and tools.
- Self-motivated; able to synthesize information from diverse sources.
- Mentors newer team members in alignment with business objectives.
- Continuously measures the completion rate of personal deliverables and compares them to scheduled commitments.
- Transparent in problem-solving approaches and options; determines fair outcomes with shared trade-offs.
- Capable of defining requirements and collaborating on solutions using technical expertise and a network of experts.
- Effective communication style for engaging with customers and cross-functional teams; utilizes product knowledge to mitigate risks and drive outcomes.
- Strong verbal, written, and interpersonal communication skills; able to produce professional and technical reports and conduct presentations.
- Innovates and integrates new processes or technologies to add significant value; advises on the cost versus benefits of change and learns new solutions to address complex problems.

Additional Information: Relocation assistance provided: Yes
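To give a flavour of the power-systems computation behind software like an MMS, the sketch below applies the standard DC power-flow approximation, in which the real-power flow on a line is the bus-angle difference divided by the line reactance (all in per-unit). This is a generic textbook formula, not code from the product described above, and the angle and reactance values are invented.

```python
import math

# DC power-flow approximation: P_ij ≈ (theta_i - theta_j) / X_ij, in per-unit.
# All numbers below are invented for illustration.
def dc_line_flow(theta_from_deg: float, theta_to_deg: float, reactance_pu: float) -> float:
    """Approximate real-power flow (p.u.) on a line from bus angles in degrees."""
    return (math.radians(theta_from_deg) - math.radians(theta_to_deg)) / reactance_pu

# A 3-degree angle difference across a line with X = 0.1 p.u.
flow = dc_line_flow(3.0, 0.0, 0.1)
print(f"flow = {flow:.4f} p.u.")
```

Market software uses linearisations like this because they make the underlying optimisation (e.g., economic dispatch) a linear program, trading some accuracy for speed and guaranteed convergence.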

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at an organization that ranks among the world's 500 largest companies. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Summary

This position evaluates, designs, develops, tests, performs maintenance on, and supports UPS technology assets. He/she contributes to the evaluation, design, testing, implementation, maintenance, performance, capacity tuning, and support of third-party infrastructures, applications, and appliances (i.e., transaction, collaboration, communications protocols, application delivery, virtualization, and directory services). This position executes processes to improve the reliability, efficiency, and availability of the systems environment.

Responsibilities

- Serves as a subject matter expert for administration, maintenance, customization, and support of workforce automation tools to increase organizational efficiency.
- Utilizes basic templates and tools for activities and duties of low risk, minimal impact, and low complexity and scope.
Qualifications

- Bachelor's degree or international equivalent in Computer Science or a related discipline (preferred)
- Prior knowledge of the Windows operating system
- Proficient in Microsoft Office Word, PowerPoint, and Excel, including working with datasets, Excel formulas, and importing/exporting CSV files
- Excellent verbal and written communication skills
- Deployment support and release experience
- Ability to run reports, perform analytics, identify root causes, and develop solutions
- Ability to facilitate the change control process
- Experience with MDM (Mobile Device Management), including running team support for applications using MDM
- Experience with AirWatch is preferred
- Strong analytical, organizational, and documentation skills
- Ability to work independently; strong problem-solving skills
- RTE duties as needed; coordinate meetings to ensure alignment among teams
- Lean Agile methodology preferred

Mandatory Skills: Experience with MDM (Mobile Device Management); strong analytical, organizational, and documentation skills; excellent written and verbal communication skills; proficiency in Microsoft Office.

Desired Skills: Experience with AirWatch and Lean Agile methodology.

Contract Type: Permanent. At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies