
42 Orchestration Tools Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an experienced DevOps Lead at our company, you will be responsible for managing and optimizing the infrastructure of a high-traffic e-commerce platform hosted on AWS. Your expertise in cloud automation, CI/CD pipelines, monitoring, and security best practices will be crucial in ensuring the seamless and scalable operation of our platform.

Key responsibilities:
- Design, implement, and maintain AWS infrastructure using best practices.
- Develop and manage CI/CD pipelines for efficient software deployment.
- Ensure high availability, scalability, and security of the e-commerce platform.
- Monitor system performance, troubleshoot issues, and implement proactive solutions.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Manage Kubernetes clusters and containerized applications (Docker, EKS).
- Optimize the cost and performance of AWS services.
- Implement security controls and compliance policies (IAM, VPC, WAF, etc.).
- Collaborate with development teams to streamline DevOps workflows.

Required skills and experience:
- At least 6 years of experience in DevOps or cloud engineering.
- Strong expertise in AWS services such as EC2, S3, RDS, and Lambda.
- Experience with containerization (Docker, Kubernetes) and orchestration tools.
- Hands-on experience with Infrastructure as Code (Terraform, CloudFormation).
- Proficiency in scripting languages such as Bash or Python.
- Knowledge of CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI/CD.
- Experience with monitoring/logging tools like Prometheus, Grafana, the ELK Stack, and CloudWatch.
- A strong understanding of security best practices in AWS.
- Experience with incident management and disaster recovery.
- Excellent problem-solving and communication skills.

Beneficial additional skills include experience with e-commerce platforms and their scaling challenges, knowledge of serverless architectures (AWS Lambda, API Gateway), and familiarity with database management (RDS, DynamoDB, MongoDB). At Binmile, you will enjoy perks and benefits such as the opportunity to work with the leadership team, health insurance, and a flexible working structure. We are an Equal Employment Opportunity Employer that celebrates diversity and is committed to building a team that represents varied backgrounds, perspectives, and skills.
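The monitoring and proactive-remediation duties in this role often come down to small automation scripts. As a generic, library-free illustration (the probe, failure mode, and retry settings are hypothetical, not from this listing), here is a retry-with-exponential-backoff pattern commonly used in DevOps tooling:

```python
import time

def with_backoff(max_attempts=4, base_delay=0.01):
    """Retry a flaky operation with exponential backoff, a common
    pattern in DevOps automation and incident-handling scripts."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    if attempt == max_attempts - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt)  # 0.01s, 0.02s, 0.04s...
        return wrapper
    return decorator

calls = {"n": 0}

@with_backoff()
def probe_health():
    # Hypothetical health probe: fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "healthy"

print(probe_health())  # succeeds on the third attempt
```

In a real pipeline the same decorator would wrap calls to cloud APIs or deployment steps rather than a toy probe.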

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You have a total of 4-6 years of development/design experience, with a minimum of 3 years in Big Data technologies, both on-premises and in the cloud. You should be proficient in Snowflake and possess strong SQL programming skills, along with strong experience in data modeling and schema design. Extensive experience with data warehousing tools (Snowflake, BigQuery, or RedShift) and BI tools (Tableau, QuickSight, or PowerBI) is required, with at least one from each category a must-have. You must also have experience with orchestration tools like Airflow and the transformation tool DBT.

Your responsibilities will include implementing ETL/ELT processes and building data pipelines, along with workflow management, job scheduling, and monitoring. You should have a good understanding of Data Governance, Security and Compliance, Data Quality, Metadata Management, Master Data Management, and Data Catalogs, as well as cloud services (AWS), including IAM and log analytics. Excellent interpersonal and teamwork skills are essential, along with experience leading and mentoring other team members. Good knowledge of Agile Scrum and strong communication skills are also required.

At GlobalLogic, the culture prioritizes caring and inclusivity. You'll join an environment where people come first, fostering meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Continuous learning and development opportunities are provided to help you grow personally and professionally. Meaningful work awaits you at GlobalLogic, where you'll have the chance to work on impactful projects that engage your curiosity and problem-solving skills. The organization values balance and flexibility, offering various career areas, roles, and work arrangements to help you balance work and life. GlobalLogic is a high-trust organization where integrity is key, ensuring a safe, reliable, and ethical global environment for all employees. Truthfulness, candor, and integrity are fundamental values upheld in everything GlobalLogic does. GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Leading the digital revolution since 2000, GlobalLogic helps create innovative digital products and experiences, transforming businesses and redefining industries through intelligent products, platforms, and services.
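As a generic illustration of the transformation stage such ETL/ELT pipelines perform (the field names and rules are hypothetical, not from any specific Snowflake or DBT model), here is the kind of cleanup step a pipeline task runs before loading to a warehouse:

```python
def clean_orders(rows):
    """Deduplicate by order_id, drop rows missing required fields,
    and normalise country codes -- the shape of a typical
    data-quality step in an ELT pipeline."""
    seen, out = set(), []
    for row in rows:
        if not row.get("order_id") or row.get("amount") is None:
            continue  # data-quality filter: required fields missing
        if row["order_id"] in seen:
            continue  # deduplication
        seen.add(row["order_id"])
        row["country"] = row.get("country", "").strip().upper()
        out.append(row)
    return out

raw = [
    {"order_id": "A1", "amount": 10.0, "country": " in "},
    {"order_id": "A1", "amount": 10.0, "country": "IN"},   # duplicate
    {"order_id": None, "amount": 5.0, "country": "US"},    # missing key
]
print(clean_orders(raw))  # one clean row, country normalised to "IN"
```

In production the same logic would be expressed as SQL in a DBT model or as a task inside an Airflow DAG rather than in-process Python.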

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Technology Support II team member at JPMorgan Chase, you will be instrumental in maintaining the operational stability, availability, and performance of our production application flows. Your primary responsibilities will include analyzing and troubleshooting production application flows to ensure seamless service delivery, participating in problem management to enhance operational stability and availability, monitoring production environments for anomalies, and communicating effectively with stakeholders to address and resolve issues promptly. You will also be expected to identify trends and provide support for incidents, problems, and changes related to full-stack technology systems, applications, or infrastructure. This role may involve providing on-call coverage during weekends to ensure continuous operational support.

The ideal candidate should have at least 2 years of experience working with Data/Python applications in a production environment. Proficiency in a programming or scripting language, particularly Python, is required. Experience with containers and container orchestration (such as Kubernetes), orchestration tools (like Control-M), and cloud platforms (specifically AWS) with infrastructure provisioning using Terraform, as well as exposure to observability and monitoring tools, will be beneficial. Strong communication and collaboration skills are essential for effective engagement in a fast-paced, dynamic environment. Preferred qualifications include experience supporting applications on platforms like Databricks, Snowflake, or AWS EMR (with Databricks preferred), a proactive approach to self-education and the evaluation of new technologies, and knowledge of virtualization, cloud architecture, services, and automated deployments.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus in data and analytics is on leveraging data to drive insights and informed business decisions, using advanced analytics techniques to help clients optimize their operations and achieve strategic goals. In data analysis at PwC, the emphasis is on extracting insights from large datasets to drive data-driven decision-making; skills in data manipulation, visualization, and statistical modeling play a crucial role in helping clients solve complex business problems.

Candidates with 4+ years of hands-on experience are sought for the position of Senior Associate in supply chain analytics. Successful candidates should possess proven expertise in supply chain analytics across domains such as demand forecasting, inventory optimization, logistics, segmentation, and network design. Hands-on experience with optimization methods such as linear programming, mixed-integer programming, and scheduling optimization is required, along with proficiency in forecasting and machine learning techniques and a strong command of statistical modeling, testing, and inference. Familiarity with GCP tools like BigQuery, Vertex AI, Dataflow, and Looker is also necessary.

Required skills include building data pipelines and models for forecasting, optimization, and scenario planning; strong SQL and Python programming skills; experience deploying models in a GCP environment; and knowledge of orchestration tools like Cloud Composer (Airflow). Nice-to-have skills include familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools, as well as strong communication and stakeholder engagement skills at the executive level.

The Senior Associate will support analytics projects within the supply chain domain, driving the design, development, and delivery of data science solutions. They are expected to interact with and advise consultants and clients as subject matter experts, conduct analysis using advanced analytics tools, and implement quality control measures to ensure deliverable integrity. Validating analysis outcomes, delivering presentations, and contributing to knowledge- and firm-building activities are also part of the role. The ideal candidate should hold a BE, B.Tech, MCA, M.Sc, M.E, M.Tech, Master's degree, or MBA from a reputed institute.
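One classic inventory-optimization result behind roles like this is the economic order quantity (EOQ). A worked sketch with made-up demand and cost figures (the numbers are illustrative, not from the listing):

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: the order size Q minimising the sum of
    annual ordering cost (annual_demand / Q * order_cost) and
    annual holding cost (Q / 2 * holding_cost)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Hypothetical figures: 10,000 units/year demand, $50 per order placed,
# $4 per unit per year to hold inventory.
q = eoq(10_000, 50, 4)
print(round(q))  # 500 units per order
```

Real supply chain work layers demand forecasts, lead times, and service-level constraints on top of closed-form results like this, typically via the LP/MIP solvers the listing mentions.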

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

You should possess a Bachelor's degree in Computer Science, Engineering, or a related field, along with at least 8 years of work experience in data-first systems, including a minimum of 4 years on Data Lake/Data Platform projects on AWS or Azure. Extensive hands-on knowledge of data warehousing tools such as Snowflake, BigQuery, or RedShift is crucial, and proficiency in SQL for managing and querying data is a must-have skill. You are expected to have experience with relational databases like Azure SQL and AWS RDS, as well as an understanding of NoSQL databases like MongoDB for handling various data formats and structures. Familiarity with orchestration tools like Airflow and DBT would be advantageous, and experience building stream-processing systems with solutions such as Kafka or Azure Event Hub is desirable.

Your responsibilities will include designing and implementing ETL/ELT processes, using tools like Azure Data Factory to ingest and transform data into the data lake. You should also have expertise in data migration and processing with AWS (S3, Glue, Lambda, Athena, RDS Aurora) or Azure (ADF, ADLS, Azure Synapse, Databricks). Data cleansing and enrichment skills are crucial to ensuring data quality for downstream processing and analytics. You must also be able to manage schema evolution and metadata for the data lake, with experience in tools like Azure Purview for data discovery and cataloging. Proficiency in creating and managing APIs for data access, preferably with JDBC/ODBC experience, is required, as is knowledge of data governance practices, data privacy laws such as GDPR, and the implementation of security measures in the data lake. Strong programming skills in languages like Python, Scala, or SQL are necessary for data engineering tasks, along with experience with automation and orchestration tools, familiarity with CI/CD practices, and the ability to optimize data storage and retrieval for analytical queries.

As a lead, you will be responsible for critical system design changes and software projects, and for ensuring timely project deliverables. You will collaborate with the Principal Data Architect and other team members to align data solutions with architectural and business goals, and with stakeholders to translate business needs into efficient data infrastructure systems. The ability to review design proposals, conduct code review sessions, and promote best practices is essential, as is experience working in an Agile model, delivering quality deliverables on time, and translating complex requirements into technical solutions.
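Schema evolution, mentioned above, can be sketched without any cloud SDK: merge the field types observed in a new record into a catalogued schema and flag type drift for review. The field names and type representation here are illustrative only:

```python
def evolve_schema(catalog, record):
    """Merge the field types observed in a new record into a catalogued
    schema, mimicking schema-on-read evolution in a data lake.
    Returns the updated catalog plus any type conflicts found."""
    conflicts = []
    for field, value in record.items():
        observed = type(value).__name__
        if field not in catalog:
            catalog[field] = observed   # new column: add it to the catalog
        elif catalog[field] != observed:
            conflicts.append(field)     # type drift: flag for review
    return catalog, conflicts

schema = {"id": "int", "name": "str"}
schema, drift = evolve_schema(schema, {"id": 7, "name": "x", "price": 9.5})
print(schema)  # 'price' added as a float column
print(drift)   # no conflicts for this record
```

Catalog tools such as Azure Purview or AWS Glue automate this bookkeeping at scale; the toy above only shows the core decision each new record triggers.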

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

You are an experienced backend developer with 5.5+ years of total experience and extensive knowledge of back-end development using Java 8 or higher, the Spring Framework (Core/Boot/MVC), Hibernate/JPA, and WebFlux. Your expertise includes a good understanding of data structures, object-oriented programming, and design patterns. You are well-versed in REST APIs and microservices architecture and proficient in working with relational and NoSQL databases, preferably PostgreSQL and MongoDB. Experience with CI/CD tools such as Jenkins, GoCD, or CircleCI is essential. You are familiar with test automation tools like xUnit, Selenium, or JMeter, and have hands-on experience with Apache Kafka or similar messaging technologies. Exposure to automated testing frameworks, performance testing tools, containerization tools like Docker, orchestration tools like Kubernetes, and cloud platforms, preferably Google Cloud Platform (GCP), is required. You have a strong understanding of UML and design patterns, excellent problem-solving skills, and a continuous-improvement mindset. Effective communication and collaboration with cross-functional teams are key strengths of yours.

Your responsibilities include writing and reviewing high-quality code, thoroughly understanding functional requirements, and analyzing clients' needs. You should be able to envision the overall solution for defined functional and non-functional requirements, determine and implement design methodologies and tool sets, and lead or support UAT and production rollouts. Creating, understanding, and validating the WBS and estimated effort for a given module or task, addressing issues promptly, giving constructive feedback to team members, troubleshooting and resolving complex bugs, and providing solutions during code and design reviews are part of your daily tasks. Additionally, you are expected to carry out POCs to ensure that suggested designs and technologies meet the requirements. You hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

The successful candidate for the Full Stack Developer position at the U.S. Pharmacopeial Convention (USP) will have a demonstrated understanding of the organization's mission, a commitment to excellence through inclusive and equitable behaviors and practices, and the ability to quickly build credibility with stakeholders. As a Full Stack Developer, you will be part of the Digital & Innovation group at USP, responsible for building innovative digital products using cutting-edge cloud technologies, and your role will be crucial in creating an outstanding digital experience for customers.

Your responsibilities will include building scalable applications and platforms using the latest cloud technologies and ensuring systems are regularly reviewed and upgraded in line with governance principles and security policies. You will participate in code reviews, architecture discussions, and agile development processes to maintain high-quality, maintainable, and scalable code. Additionally, you will provide technical guidance and mentorship to junior developers and team members, and document and communicate technical designs, processes, and solutions to both technical and non-technical stakeholders.

To qualify, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with 6-10 years of experience in software development with a focus on cloud computing. Strong knowledge of cloud platforms such as AWS, Azure, and Google Cloud, and of services such as compute, storage, networking, and security, is essential. Experience leading and mentoring junior software developers, extensive knowledge of Java Spring Boot applications, and proficiency in programming languages like Python or Node.js are also required, along with experience with AWS/Azure services, containerization technologies like Docker and Kubernetes, front-end technologies like React.js/Node.js, and microservices. Familiarity with cloud architecture patterns, best practices, security principles, data pipelines, and ETL tools is a plus, as are experience leading continuous-improvement or new-technology initiatives, strong analytical and problem-solving skills, and the ability to manage multiple projects and priorities in a fast-paced environment. Additional preferences include experience with scientific chemistry nomenclature, pharmaceutical datasets, and knowledge graphs, and the ability to explain complex technical issues to a non-technical audience. Strong communication skills and the ability to work independently, make tough decisions, and prioritize tasks are essential.

As a Full Stack Developer at USP, you will have supervisory responsibilities and will be eligible for a comprehensive benefits package that includes company-paid time off, healthcare options, and retirement savings. USP is an independent scientific organization that develops quality standards for medicines, dietary supplements, and food ingredients in collaboration with global health and science authorities. The organization values inclusivity, mentorship, and professional growth, emphasizing Diversity, Equity, Inclusion, and Belonging in its mission to ensure quality in health and healthcare worldwide.

Posted 4 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

Propel operational success with your expertise in technology support and a commitment to continuous improvement. As a Technology Support II team member within JPMorgan Chase, you will play a vital role in ensuring the operational stability, availability, and performance of our production application flows. You will be responsible for troubleshooting, maintaining, identifying, escalating, and resolving production service interruptions for all internally and externally developed systems, thereby supporting a seamless user experience and fostering a culture of continuous improvement.

**Job Responsibilities:**
- Analyze and troubleshoot production application flows to ensure end-to-end application or infrastructure service delivery supporting the business operations of the firm.
- Improve operational stability and availability through participation in problem management.
- Monitor production environments for anomalies and address issues utilizing standard observability tools.
- Assist in the escalation and communication of issues and solutions to business and technology stakeholders.
- Identify trends and assist in the management of incidents, problems, and changes in support of full-stack technology systems, applications, or infrastructure.
- Provide on-call coverage during weekends when required.

**Required qualifications, capabilities, and skills:**
- 2+ years of experience, ideally working with Data/Python applications in a production environment.
- Experience in a programming or scripting language (Python).
- Experience working with containers and container orchestration (Kubernetes).
- Experience working with orchestration tools (Control-M).
- Experience with cloud platforms (AWS), ideally provisioning infrastructure using Terraform.
- Exposure to observability and monitoring tools and techniques.
- Good communication and collaboration skills, with the ability to work effectively in a fast-paced, dynamic environment.

**Preferred qualifications, capabilities, and skills:**
- Experience supporting applications on platforms such as Databricks, Snowflake, or AWS EMR (Databricks preferred) is a significant advantage.
- Actively self-educates, evaluates new technologies, and recommends suitable ones.
- Knowledge of virtualization, cloud architecture, services, and automated deployments.
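The anomaly-monitoring duties above can be illustrated with a small stdlib-only sketch: compute the error rate over a sliding window of recent log lines and alert when it crosses a threshold. The log format, window size, and threshold are all hypothetical, not JPMorgan tooling:

```python
from collections import deque

class ErrorRateMonitor:
    """Track the fraction of ERROR lines in a sliding window of log
    events; alert when it exceeds a threshold -- a toy version of what
    production observability tooling does."""
    def __init__(self, window=10, threshold=0.5):
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, line):
        self.events.append(line.startswith("ERROR"))
        rate = sum(self.events) / len(self.events)
        return rate > self.threshold  # True => raise an alert

mon = ErrorRateMonitor(window=5, threshold=0.5)
lines = ["INFO ok", "ERROR db timeout", "INFO ok",
         "ERROR db timeout", "ERROR db timeout"]
alerts = [mon.observe(l) for l in lines]
print(alerts)  # -> [False, False, False, False, True]
```

Real systems (Prometheus alert rules, CloudWatch alarms) express the same window-and-threshold idea declaratively rather than in application code.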

Posted 4 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Join us as a Data Engineer at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable, and secure infrastructure, ensuring seamless delivery of our digital solutions.

To be successful as a Data Engineer, you should have hands-on experience with PySpark and strong knowledge of DataFrames, RDDs, and SparkSQL, along with hands-on experience developing, testing, and maintaining applications on the AWS Cloud. A strong command of the AWS data analytics technology stack (Glue, S3, Lambda, Lake Formation, Athena) is essential. Additionally, you should be able to design and implement scalable and efficient data transformation and storage solutions using Snowflake. Experience ingesting data into Snowflake from storage formats such as Parquet, Iceberg, JSON, and CSV is required, as is familiarity with using DBT (Data Build Tool) with Snowflake for ELT pipeline development. Advanced SQL and PL/SQL programming skills are a must. Experience building reusable components using Snowflake and AWS tools and technology is highly valued. Exposure to data governance or lineage tools such as Immuta and Alation is an added advantage, knowledge of orchestration tools such as Apache Airflow or Snowflake Tasks is beneficial, and familiarity with the Ab Initio ETL tool is a plus.

Other highly valued skills include the ability to engage with stakeholders, elicit requirements and user stories, and translate requirements into ETL components; a good understanding of infrastructure setup and the ability to provide solutions either individually or with a team; knowledge of Data Mart and data warehousing concepts; good analytical and interpersonal skills; and experience implementing a cloud-based enterprise data warehouse across multiple data platforms, including Snowflake and NoSQL environments, to build a data movement strategy. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology skills, as well as job-specific technical skills. The role is based out of Chennai.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Meet the needs of stakeholders/customers through specialist advice and support.
- Perform prescribed activities in a timely manner and to a high standard, impacting both the role itself and surrounding roles.
- Likely to have responsibility for specific processes within a team.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- Demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- Manage your own workload, take responsibility for the implementation of systems and processes within your own work area, and participate in projects broader than the direct team.
- Execute work requirements as identified in processes and procedures, collaborating with and impacting the work of closely related teams.
- Provide specialist advice and support pertaining to your own work area.
- Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams.
- Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative and operational expertise.
- Make judgements based on practice and previous experience.
- Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures.
- Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements.
- Build relationships with stakeholders/customers to identify and address their needs.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship (our moral compass, helping us do what we believe is right) and the Barclays Mindset to Empower, Challenge, and Drive (the operating manual for how we behave).
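A stdlib-only sketch of the DataFrame-style aggregation that SparkSQL expresses with GROUP BY (the column names are illustrative, not from this role):

```python
from itertools import groupby
from operator import itemgetter

def total_by_key(rows, key, value):
    """GROUP BY key, SUM(value) -- the shape of a typical SparkSQL
    aggregation, expressed with itertools purely for illustration."""
    rows = sorted(rows, key=itemgetter(key))   # groupby needs sorted input
    return {k: sum(r[value] for r in grp)
            for k, grp in groupby(rows, key=itemgetter(key))}

trades = [
    {"desk": "fx", "notional": 100},
    {"desk": "rates", "notional": 250},
    {"desk": "fx", "notional": 50},
]
print(total_by_key(trades, "desk", "notional"))  # {'fx': 150, 'rates': 250}
```

In PySpark the same query would be `df.groupBy("desk").sum("notional")` executed in parallel across partitions; the in-memory version above only shows the logical operation.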

Posted 4 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Associate Architect/Architect (Networking) with 10-12 years of experience in the IT infrastructure industry, your primary responsibilities will include:
- Leading discussions on technical requirements, estimations, and proposals.
- Resolving technical blockers and gaps for the engineering team to ensure timely deliverables.
- Developing strategic relationships with customers to provide trusted advice on various offerings and solutions.
- Providing pre-sales technical support to Sales and Engineering teams.
- Supporting evaluations of offerings and technical proofs of concept.
- Responding to customer and partner inquiries, including requests for proposal (RFPs) and requests for information (RFIs).
- Participating in technical discussions with customers and third parties to define the end-to-end solutions customers require.
- Guiding and reviewing technical offer deliverables such as the SoW.
- Driving and documenting the overall deal solution strategy, interfacing with Engineering and Services for optimal scoping and cost structure.
- Leading PoC/trial proposals, including defining trial scope, architecture, and use cases, and obtaining the necessary management approvals.
- Designing simplified solutions for complex problems in the networking, cloud, and telecom domains, considering performance, scale, and resilience.
- Documenting product architecture, requirements, and concepts in clear, concise specifications.
- Explaining technical concepts to non-technical audiences.
- Collaborating effectively in a team environment to provide the best solutions.
- Adapting to new technologies in a dynamic environment.

Mandatory Skills:
- A minimum of 10-12 years of experience in the IT infrastructure industry.
- Experience in networking and/or virtualization.
- Experience in SDN/SD-WAN solution development.
- Experience designing highly available systems with redundancy.
- Experience with modern software architectures focusing on modularity and reusability.
- Familiarity with enterprise solutions and architectures, including cloud, virtualization, storage, and clustering.
- Good knowledge of Network Management Systems.
- Experience with the major public cloud infrastructures (AWS, GCP, Azure).
- Knowledge of network automation, intent-based networking, and orchestration tools.
- Expert-level experience with routing protocols and switching technologies.
- Expert-level experience with Cisco/Aruba/Arista data center and campus switching and routing.
- Hands-on programming experience in languages like Java, Python, GoLang, C, and C++.
- In-depth knowledge of REST API-based design and programming.
- Expertise in mentoring and driving teams in the development of enterprise applications and networking features.
- Strong communication and presentation skills.

Preferred Technical and Professional Expertise:
- Experience with Docker and Kubernetes.
- Professional certifications such as AWS Solution Architect, GCP Associate Cloud Engineer/Architect, Azure Network Engineer Associate/Azure Solutions Architect Expert, or CCNA, and experience with CI/CD pipelines, DPDK, CNI, SmartNICs, and SONiC.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You will be responsible for:
- Delivering complex Java-based solutions, preferably with experience in fintech product development.
- Demonstrating a strong understanding of microservices architectures and RESTful APIs.
- Developing cloud-native applications, with familiarity with containerization and orchestration tools such as Docker and Kubernetes.
- Having experience with at least one major cloud platform (AWS, Azure, or Google Cloud), with knowledge of Oracle Cloud preferred.
- Utilizing DevOps tools like Jenkins and GitLab CI/CD for continuous integration and deployment.
- Understanding monitoring tools like Prometheus and Grafana, as well as event-driven architecture and message brokers like Kafka.
- Monitoring and troubleshooting the performance and reliability of cloud-native applications in production environments.
- Possessing excellent verbal and written communication skills and the ability to collaborate effectively within a team.

About Us: Oracle is a global leader in cloud solutions that leverages cutting-edge technology to address current challenges. With over 40 years of experience, we partner with industry leaders across various sectors, maintaining integrity amidst ongoing change. We believe in fostering innovation by empowering all individuals to contribute, striving to build an inclusive workforce that offers opportunities for everyone. Oracle careers provide a gateway to international opportunities where a healthy work-life balance is encouraged. We provide competitive benefits, including flexible medical, life insurance, and retirement options, to support our employees, and we promote community engagement through volunteer programs. We are dedicated to including people with disabilities at all stages of the employment process. If you require assistance or accommodation due to a disability, please contact us at accommodation-request_mb@oracle.com or call +1 888 404 2494 in the United States.
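The event-driven pattern referenced above (Kafka-style topics, producers, and consumers) can be sketched in-memory. This is an illustration of the pattern only, not Oracle's stack, and it deliberately omits what Kafka actually provides: persistence, partitions, consumer groups, and delivery guarantees:

```python
from collections import defaultdict

class Broker:
    """Toy in-memory message broker: producers publish to named topics,
    and every subscriber handler for that topic receives each message."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("payments", received.append)   # consumer on "payments"
broker.publish("payments", {"id": 1, "amount": 99})
broker.publish("audit", {"id": 1})              # no subscriber: dropped here
print(received)  # [{'id': 1, 'amount': 99}]
```

The decoupling shown (publishers never reference subscribers directly) is the core reason event-driven architectures scale well across microservices.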

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As an AI Decision Science Consultant at Accenture Strategy & Consulting, specifically in the Global Network - Data & AI practice under the CMT Software & Platforms team, you will play a pivotal role in helping clients leverage analytics to achieve high performance and make better decisions. In collaboration with onsite counterparts, you will drive the development and delivery of Data & AI solutions for SaaS and PaaS clients. You will have the opportunity to work with a diverse team of talented professionals experienced in leading statistical tools and methods. From gathering business requirements to developing and testing AI algorithms tailored to address specific business challenges, you will be involved in the end-to-end process of delivering AI solutions. Your role will also include monitoring project progress, managing risks, and fostering positive client relationships by ensuring alignment between project deliverables and client expectations. In this role, you will mentor and guide a team of AI professionals, promoting a culture of collaboration, innovation, and excellence. Continuous learning and professional development will be supported by Accenture, enabling you to enhance your skills and certifications in SaaS & PaaS technologies. As part of the Data & AI practice, you will be at the forefront of leveraging AI technologies such as Generative AI frameworks and statistical models to drive business performance improvement initiatives. To excel in this role, you should possess a bachelor's or master's degree in computer science, engineering, data science, or a related field. With at least 5 years of experience in working on AI projects, you should have hands-on exposure to AI technologies, statistical packages, and machine learning techniques. Proficiency in programming languages such as R, Python, Java, SQL, and experience in working with cloud platforms like AWS, Azure, or Google Cloud will be crucial. 
Your strong analytical, problem-solving, and project management skills will be essential in navigating complex issues and delivering successful outcomes. Excellent communication and interpersonal skills will enable you to engage effectively with clients and internal stakeholders, while your ability to work with large datasets and present insights will drive informed decision-making. Join us at Accenture Strategy & Consulting and be part of a dynamic team dedicated to helping clients unlock new business opportunities through Data & AI solutions.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You should have experience understanding and translating data, analytics requirements, and functional needs into technical requirements while collaborating with global customers. Your responsibilities will include designing cloud-native data architectures to support scalable, real-time, and batch processing. You will build and maintain data pipelines for large-scale data management, in alignment with the data strategy and processing standards, and define strategies for data modeling, data integration, and metadata management.

The role also calls for strong experience in database, data warehouse, and data lake design and architecture. You should be proficient in leveraging cloud platforms such as AWS, Azure, or GCP for data storage, compute, and analytics services, and in database programming using various SQL flavors. You will implement data governance frameworks encompassing data quality, lineage, and cataloging, and collaborate with cross-functional teams including business analysts, data engineers, and DevOps teams.

Familiarity with the Big Data ecosystem, whether on-premises (Hortonworks/MapR) or in the cloud, is required, along with the ability to evaluate emerging cloud technologies and suggest enhancements to the data architecture. Proficiency in an orchestration tool like Airflow or Oozie for scheduling pipelines is preferred, and hands-on experience with tools such as Spark Streaming, Kafka, Databricks, and Snowflake is necessary. You should be comfortable working in an Agile/Scrum development process and optimizing data systems for cost efficiency, performance, and scalability.
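The scheduling work that tools like Airflow or Oozie automate is, at its core, running pipeline tasks in dependency order. Below is a minimal Python sketch of that idea using the standard library's topological sorter; the task names and extract-transform-load shape are hypothetical, and a real scheduler adds retries, backfills, logging, and SLAs on top of this ordering.

```python
# Toy illustration of dependency-ordered task execution, the core job of an
# orchestration tool like Airflow or Oozie. Task names are hypothetical.
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Execute callables in `tasks` respecting `deps` (task -> upstream set);
    returns the execution order."""
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()  # a real scheduler wraps this with retries and logging
    return order

# A typical extract -> transform -> load chain:
log = []
tasks = {
    "extract": lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
    "load": lambda: log.append("load"),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
print(run_pipeline(tasks, deps))  # → ['extract', 'transform', 'load']
```

In Airflow the same shape would be declared with operators and `>>` dependencies inside a DAG definition rather than executed inline.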

Posted 1 month ago

Apply

6.0 - 8.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Role: Cloud Developer
Experience: 6-8 yrs
Location: Bangalore

Required Skillset
- Proficiency in Node.js with at least 2-3 years of experience; able to write basic code constructs, such as proper for loops
- Candidates do not need to know every tool and task listed in the job description, but should be capable problem-solvers with experience on similar services
- Experience with AWS services, specifically Lambda and API Gateway (also S3)
- Familiarity with NoSQL databases like DynamoDB and MongoDB

The job involves developing pipelines, working on architecture, deploying services, and potentially building infrastructure using tools like Terraform. The role carries end-to-end responsibility for APIs, from building the infrastructure to maintaining it. Emphasis is on practical experience and problem-solving skills over specific tool knowledge.

Tech stack: AWS Lambda and S3 (for storage); backend in Node.js; orchestration with Step Functions or Apache Airflow, plus events; SQL database scripting; Python is a plus.

Interested candidates can send a resume to jegadheeswari.m@spstaffing.in or reach me at 9566720836.
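The Lambda-plus-API-Gateway pattern this role centers on boils down to a handler that receives a proxy-format event and returns a status code and JSON body. A minimal sketch follows, written in Python for illustration (the role itself is Node.js-focused); the in-memory table is a hypothetical stand-in for a DynamoDB lookup.

```python
# Minimal Lambda-style handler behind an API Gateway proxy integration.
# FAKE_TABLE stands in for a DynamoDB query; names here are illustrative.
import json

FAKE_TABLE = {"42": {"id": "42", "name": "demo-item"}}

def handler(event, context=None):
    # API Gateway proxy events carry path parameters under "pathParameters"
    item_id = (event.get("pathParameters") or {}).get("id")
    item = FAKE_TABLE.get(item_id)
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}

print(handler({"pathParameters": {"id": "42"}})["statusCode"])  # → 200
```

The equivalent Node.js handler has the same event/response contract; only the runtime differs.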

Posted 2 months ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Introduction: A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

We are looking for a skilled Infrastructure Operations Engineer with expertise in Linux systems, networking, automation, Kubernetes, and orchestration tools. The ideal candidate will have hands-on experience managing Linux environments and automating infrastructure tasks using tools such as Ansible, Jenkins, and scripting. This role is responsible for ensuring system reliability, automating repetitive tasks, and supporting the deployment and maintenance of applications running on the platform.

Your role and responsibilities: As a Site Reliability Engineer, you will work in an agile, collaborative environment to build, deploy, configure, and maintain systems for the IBM client business. In this role, you will lead the problem resolution process for our clients, from analysis and troubleshooting to deploying the latest software updates and fixes. We are looking for a dynamic Site Reliability Engineer to join our Cloud IaaS team in Bengaluru, India, who is responsive to market needs and delivers value to our clients in a fast-changing cloud landscape. The SRE team is dedicated to ensuring that the IBM Cloud is at the forefront of cloud technology, from data centre design, storage and network architecture, and compute clusters to flexible infrastructure services.

We are building IBM's next-generation cloud platform to deliver performance and predictability for our customers' most demanding workloads, at global scale and with leadership efficiency, resiliency, and security. It is an exciting time, and as a team we are driven by this incredible opportunity to thrill our clients.

- Manage and maintain Linux-based systems across multiple environments
- Automate provisioning, configuration, and deployment tasks using tools like Ansible and Jenkins
- Design, implement, and manage deployment of containerized applications using Kubernetes and Docker
- Monitor and troubleshoot system performance, network issues, and applications to ensure optimal uptime and efficiency
- Harden servers from scratch using baseboard management controllers (BMCs)
- Implement and maintain security best practices, ensuring compliance with company policies
- Proactively identify potential improvements to processes and systems
- Analyze and fix network and DNS issues in the environment
- Upgrade Kubernetes worker nodes and packages without interrupting the cluster
- Maintain benchmarking standards on systems to ensure continuous compliance
- Participate in on-call rotation to support critical infrastructure issues

Required education: Bachelor's Degree

Required technical and professional expertise: In addition to your strong verbal and written communication skills, you'll possess:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience)
- 5+ years of experience managing Linux systems in a production environment
- Strong hands-on expertise with automation tools such as Ansible and Jenkins
- Hands-on experience with Kubernetes and containerization (e.g., Docker)
- Familiarity with CI/CD pipelines and DevOps methodologies
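A recurring SRE task in roles like this is probing a service with retries and exponential backoff before declaring it unhealthy. The sketch below illustrates the pattern in plain Python; the probe, attempt count, and cap are hypothetical, and a real check would hit an HTTP endpoint or query a systemd unit, sleeping between attempts.

```python
# Exponential-backoff health-check pattern, sketched without real I/O.
# Thresholds and the probe callable are illustrative, not from the listing.
def backoff_delays(attempts, base=1.0, cap=30.0):
    """Delays (seconds) between retries: base * 2**n, capped at `cap`."""
    return [min(base * 2 ** n, cap) for n in range(attempts)]

def check_with_retries(probe, attempts=5):
    """Return (healthy, tries_used); sleeps omitted to keep the sketch testable."""
    for n in range(1, attempts + 1):
        if probe():
            return True, n
    return False, attempts

flaky = iter([False, False, True])              # succeeds on the third probe
print(backoff_delays(5))                        # → [1.0, 2.0, 4.0, 8.0, 16.0]
print(check_with_retries(lambda: next(flaky)))  # → (True, 3)
```

Capping the delay keeps a long outage from pushing retry intervals past the alerting window.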

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Bengaluru

Remote

Location: Remote
Experience: 6-12 years; immediate joiners preferred

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field
- 3-5 years of experience in data engineering, cloud architecture, or Snowflake administration
- Hands-on experience with Snowflake features: Snowpipe, Streams, Tasks, External Tables, and Secure Data Sharing
- Proficiency in SQL, Python, and data movement tools (e.g., AWS CLI, Azure Data Factory, Google Cloud Storage Transfer)
- Experience with data pipeline orchestration tools such as Apache Airflow, dbt, or Informatica
- Strong understanding of cloud storage services (S3, Azure Blob, GCS) and working with external stages
- Familiarity with network security, encryption, and data compliance best practices
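The Snowpipe and external-stage work this role describes ultimately revolves around `COPY INTO` statements that load files from cloud storage stages. As a hedged sketch, the helper below composes such a statement in Python; the table, stage, and pattern names are hypothetical, and in practice this DDL usually lives in Snowflake itself or in a dbt/Airflow-managed script.

```python
# Illustrative builder for a Snowflake-style COPY INTO statement loading from
# an external stage. All object names here are made up for the example.
def copy_into_sql(table, stage, file_format="PARQUET", pattern=None):
    sql = f"COPY INTO {table} FROM @{stage} FILE_FORMAT = (TYPE = {file_format})"
    if pattern:
        sql += f" PATTERN = '{pattern}'"
    return sql

print(copy_into_sql("analytics.events", "s3_landing_stage", pattern=".*[.]parquet"))
```

A Snowpipe wraps essentially this statement in a `CREATE PIPE ... AS COPY INTO ...` definition so new stage files load automatically.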

Posted 2 months ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Department Description: The ERP/JD Edwards DevOps division provides continuous support to the Product Development, Quality, and Product Management teams. Additionally, the team continuously strives to develop new standards and procedures that provide quality guidance and methods.

Role Objective: In this role you will work closely with the Development, Quality, and Release Engineering teams to support, maintain, and configure environments. This includes building and deploying applications and tools, configuration, product mastering and delivery, patching, and troubleshooting across various operating systems and database servers. In addition, you will support ongoing automation projects toward developing a provisioning system and be responsible for framework enhancements. You should also be able to handle complex projects and to train and mentor junior team members.

Qualifications:
- B.E/B.Tech/M.E/M.Tech/MCA degree in a field relevant to the functional area
- 5-10 years of JD Edwards CNC experience
- Excellent knowledge of the functional area
- Excellent written and oral communication skills
- Self-motivated, with the ability to work well both in groups and independently
- Attention to detail and strong analytical and engineering skills
- Good project management and decision-making skills

Special Skills:
- Extensive knowledge of the SDLC and software release processes
- 2-6 years of hands-on experience with automation tools and languages such as PL/SQL, Python, Ruby, Bash, and Groovy
- Hands-on technical professional with demonstrated experience in architecting automation, CI/CD, and application performance management tools
- Experience working with Windows and Unix/Linux platforms
- Java development knowledge is desired
- SQL and RDBMS knowledge is desired
- REST web services knowledge is desired
- Cloud experience is a plus
- Working experience with orchestration tools is a plus
- Experience with Chef, Docker, or Puppet, plus Rundeck, Jenkins, and Artifactory
- Agile/Scrum development methodology knowledge/experience is a plus

Career Level - IC3

Responsibilities:
- Understand the existing processes for provisioning environments and work on automating them
- Set up web servers, database servers and clients, third-party tools, etc.
- Patch environments
- Work closely with Engineering teams to create test and provisioning strategies
- Maintain hosted environments
- Test stack upgrades, including OS, DB, web servers, etc.
- Handle service requests and troubleshooting of environments
- Maintain documentation
- Maintain installation scripts
- Automation script development: develop scripts for various applications using tools like Perl and Shell, scripting in an agile development ecosystem
- Automation script maintenance: update scripts as new builds and new functionality are added to the application
- Automation planning: participate in formulating an overall strategy, designing the framework and functions to support automation
- Extend the automation framework as necessary
- Research and adopt readily available open-source tools to accomplish the goal of automation
- Operate both independently and as part of a team, often as the primary contact for Engineering teams
- Cooperate with others within and outside the group who impact work processes to achieve group objectives
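Environment provisioning and patching work of the kind listed above often includes scanning installation or patch logs for failures before promoting an environment. Below is a small illustrative Python sketch of that step; the log format and failure keywords are hypothetical, not taken from any JD Edwards tool.

```python
# Illustrative automation step: flag failing lines in an install/patch log.
# The keywords and sample log are hypothetical.
import re

FAILURE_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (r"\bERROR\b", r"\bFATAL\b")]

def failed_lines(log_text):
    """Return (line_number, line) pairs that match a failure pattern."""
    hits = []
    for num, line in enumerate(log_text.splitlines(), start=1):
        if any(p.search(line) for p in FAILURE_PATTERNS):
            hits.append((num, line))
    return hits

sample = "INFO starting install\nERROR missing package libfoo\nINFO done\n"
print(failed_lines(sample))  # → [(2, 'ERROR missing package libfoo')]
```

In a real pipeline this check would gate the next stage of a Jenkins or Rundeck job, failing the build when any hits are found.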

Posted 2 months ago

Apply
Page 2 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
