
3919 Bitbucket Jobs - Page 21

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: DevOps Specialist – Toolchain & Environment Support

Job Description: DevOps Engineer to provide leadership and technical expertise in collaboration with agile development teams and internal and external partners, enabling automated software and data configuration, integration, and deployment in end-to-end environments. Requires a proven track record with the DevOps functions and tools used to achieve highly automated Continuous Integration (CI), Continuous Deployment (CD), and testing throughout the product lifecycle. Should be hands-on and able to deliver on time without defects, with the ability to achieve subject-matter expertise quickly on new applications. This role requires active collaboration with software engineers and teams to help them mature their DevOps processes and tool usage, as well as troubleshooting and resolving issues in non-prod and prod environments related to the DevOps toolchain.

Responsibilities & Requirements:
• Ability to understand varied and complex application architectures to support setting up Continuous Integration (CI), Continuous Deployment (CD), and Continuous Testing.
• 5+ years of hands-on DevOps experience across the following functions (tools): version control (Git/Bitbucket), build orchestration (Jenkins), code quality (SonarQube and unit-testing frameworks), artifact management (Artifactory), and deployment (Ansible).
• Experience with database build and deployment using Datical, and release orchestration using XLRelease, would be an added advantage.
• Hands-on experience in application development and operating environments, including Linux/Windows Server, Java, Python, Oracle/SQL Server, Web UI, and JBoss/Tomcat/Apache Web Server/WebLogic.
• Very strong analysis and problem-solving ability; excellent documentation, software design, build, and test skills.
• Proven track record of project delivery in an agile environment; experience with JIRA would be an added advantage.
• Troubleshoot and resolve issues in non-prod and prod environments related to the DevOps tools.
• Willing to stretch and overlap well with global teams.

Experience: 5+ years

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

At Macquarie Group, we are committed to empowering people to innovate and invest for a better future. Our global support team plays a crucial role in ensuring our business operates smoothly and efficiently. We are currently seeking a talented and driven DevOps Engineer with 6+ years of experience to join our Tooling and Visualisation domain (Power BI and Alteryx platforms). At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets and with 56 years of unbroken profitability. You'll be part of a friendly and supportive team where everyone, no matter what role, contributes ideas and drives outcomes.

What role will you play?
As a DevOps Engineer in the Tooling and Visualisation domain (Power BI and Alteryx), you will be a key member of our global support team. You will be responsible for the implementation and maintenance of our Power BI and Alteryx infrastructure, ensuring high availability, performance, and security. You will work closely with various stakeholders to provide technical support and drive continuous-improvement initiatives. Additionally, you will design, deploy, and maintain Power BI and Alteryx infrastructure; monitor and optimize system performance to ensure high availability and reliability; and troubleshoot and resolve technical issues related to the Power BI and Alteryx platforms.

What You Offer
• 2+ years of hands-on experience with Windows Server 2016/2019.
• 4+ years of proven hands-on experience with AWS Cloud (CloudFormation, EC2, EBS, S3, EKS), MS Azure, and private- and hybrid-cloud infrastructure.
• Proven PowerShell and Python knowledge at an intermediate level, including automation of repetitive manual admin tasks and CI/CD.
• Proven experience with APIs, building system integrations, and automation.
• Proven experience with dev tools: Bitbucket, Git, Splunk, Grafana.
• Good to have: experience with basic SQL queries.
• Good to have: Linux and Bash experience.
• Experience setting up, administering, and maintaining Power BI and/or Alteryx environments is an advantage.

We love hearing from anyone inspired to build a better future with us; if you're excited about the role or working at Macquarie, we encourage you to apply.

About Technology
Technology enables every aspect of Macquarie, for our people, our customers and our communities. We're a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications and designing tomorrow's technology solutions.

Our commitment to diversity, equity and inclusion
We are committed to fostering a diverse, equitable and inclusive workplace. We encourage people from all backgrounds to apply and welcome all identities, including race, ethnicity, cultural identity, nationality, gender (including gender identity or expression), age, sexual orientation, marital or partnership status, parental, caregiving or family status, neurodiversity, religion or belief, disability, or socio-economic background. We welcome further discussions on how you can feel included and belong at Macquarie as you progress through our recruitment process. Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

DevLabs Technology is looking to fill the below position on a contractual basis.

Role: DevOps Senior Engineer
Location: Pune

Required Past Experience:
● 4 to 8 years of demonstrated relevant experience deploying, configuring, and supporting public cloud infrastructure (GCP as primary), IaaS, and PaaS.
● Experience configuring and managing GCP infrastructure environment components:
● Foundation components: networking (VPC, VPN, Interconnect, firewalls and routes), IAM, folder structure, organization policy, VPC Service Controls, Security Command Center, etc.
● Application components: BigQuery, Cloud Composer, Cloud Storage, Google Kubernetes Engine (GKE), Compute Engine, Cloud SQL, Cloud Monitoring, Dataproc, Data Fusion, Bigtable, Dataflow, etc.
● Operational components: audit logs, Cloud Monitoring, alerts, billing exports, etc.
● Security components: KMS, Secret Manager, etc.
● Experience with infrastructure automation using Terraform.
● Experience designing and implementing CI/CD pipelines with Cloud Build, Jenkins, GitLab, Bitbucket Pipelines, etc., and source code management tools like Git.
● Experience with shell scripting and Python.

Required Skills and Abilities:
● Mandatory skills: GCP networking & IAM, Terraform, shell/Python scripting, CI/CD pipelines.
● Secondary skills: Composer, BigQuery, GKE, Dataproc, GCP networking.
● Good to have: certifications in any of the following: Cloud DevOps Engineer, Cloud Security Engineer, Cloud Network Engineer.
● Good verbal and written communication skills.
● Strong team player.

Posted 1 week ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We're seeking a Tech Lead to drive project architecture and mentor engineers.

Key Responsibilities: Lead technical design and reviews. Guide junior developers and ensure code quality. Collaborate with PMs to deliver on business goals.

Required Skills & Qualifications: 5+ years of hands-on software development. Excellent problem-solving and communication skills. Proven ability to lead cross-functional teams.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies

Posted 1 week ago

Apply

5.0 - 8.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Minimum 7 years of experience in Salesforce Sales Cloud, Service Cloud, and Communities implementation.

Salesforce technical skills: Lightning Experience, LWC (Lightning Web Components), Apex, triggers, Visualforce, and Lightning Aura components. Strong experience in Salesforce configuration and customization. Strong experience integrating Salesforce with external systems. Strong experience in agile ways of working. DevOps practitioner with Bitbucket, Git, and CI/CD experience. Good understanding of and exposure to Salesforce DX. Proven knowledge of business processes and their KPIs, including best practices to support the relevant business processes. Good communication, analytical, and problem-solving skills.

Responsibilities: Configure and customize solutions on the Salesforce platform to support critical business functions and meet project objectives and client requirements. Ensure Salesforce best practices are followed when configuring and customizing the application. Provide detailed level-of-effort estimates for proposed solutions, and articulate the benefits and risks of a solution's feasibility and functionality. Develop, test, and document working custom development, integrations, and data-migration elements of a Salesforce implementation. Follow and understand Salesforce product and technical capabilities resulting from product releases and acquisitions. Communicate with the project manager, scrum master, clients, and other developers to design cohesive project strategies and ensure effective collaboration throughout all phases of development, testing, and deployment. Interact directly with clients, managers, and end users as necessary to analyze project objectives and capability requirements, including specifications for user interfaces, customized applications, and interactions with internal Salesforce instances.

Desired certifications: Salesforce App Builder, Platform Developer I, Platform Developer II, Sales Cloud, Service Cloud.

Mandatory skills: Salesforce platform development.
Experience: 5-8 years.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Data Engineer
Location: Hyderabad
Contract duration: 12+ months, likely to be extended

Primary skills: Strong in Python programming, PySpark queries, AWS (3 roles)
Secondary skill: Palantir

Responsibilities
• Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
• Collaborate with product and technology teams to design and validate the capabilities of the data platform.
• Identify, design, and implement process improvements: automating manual processes, optimizing for usability, and re-designing for greater scalability.
• Provide technical support and usage guidance to the users of our platform's services.
• Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

Qualifications
• Experience building and optimizing data pipelines in a distributed environment.
• Experience supporting and working with cross-functional teams.
• Proficiency working in a Linux environment.
• 4+ years of advanced working knowledge of SQL, Python, and PySpark (PySpark queries are a must).
• Knowledge of Palantir.
• Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
• Experience with platform monitoring and alerting tools.
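As a small illustration of the SQL-plus-Python pipeline work these data-engineering roles describe, here is a hedged sketch of one transformation step. The table and column names are hypothetical, and the standard-library sqlite3 module stands in for the warehouse or Spark engine purely for brevity; no Spark cluster is assumed.

```python
import sqlite3

# Hypothetical example: aggregate raw click events into a daily summary,
# the kind of SQL transformation step a data pipeline might run.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day TEXT, user_id TEXT, clicks INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("2024-01-01", "u1", 3), ("2024-01-01", "u2", 5), ("2024-01-02", "u1", 2)],
)

# Derive a summary dataset: total clicks and distinct users per day.
rows = conn.execute(
    """
    SELECT day, SUM(clicks) AS total_clicks, COUNT(DISTINCT user_id) AS users
    FROM events
    GROUP BY day
    ORDER BY day
    """
).fetchall()

print(rows)  # [('2024-01-01', 8, 2), ('2024-01-02', 2, 1)]
```

In PySpark the equivalent step would typically be expressed via `spark.sql(...)` or a `DataFrame.groupBy` chain; the query shape is the same.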

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Data Engineer (Python, PySpark & Palantir)
Location: Hyderabad
Contract duration: 12+ months

Technical skills: Python, PySpark, and Palantir

Responsibilities
• Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
• Collaborate with product and technology teams to design and validate the capabilities of the data platform.
• Identify, design, and implement process improvements: automating manual processes, optimizing for usability, and re-designing for greater scalability.
• Provide technical support and usage guidance to the users of our platform's services.
• Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

Qualifications
• Experience building and optimizing data pipelines in a distributed environment.
• Experience supporting and working with cross-functional teams.
• Proficiency working in a Linux environment.
• 4+ years of advanced working knowledge of SQL, Python, and PySpark (PySpark queries are a must).
• Knowledge of Palantir.
• Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
• Experience with platform monitoring and alerting tools.

Interested candidates, please share your updated CV with mounika.polsani@tekgence.com.

Posted 1 week ago

Apply

9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Location Name: NR Trident Tech Park

Job Purpose
Bajaj Finserv Web is a critical component of the company's omnipresence strategy. You will be working with India's largest NBFC's web technology stack, encompassing over 40 business lines and 230+ features, with nearly 500 million in traffic and over 30,000 webpages. It is an integrated platform offering a portfolio of products covering payments, cards, wallets, loans, deposits, mutual funds, and loans on lifestyle products, ranging from consumer durables to home furnishings. The Technical Architect will lead a major implementation project, collaborating with various POD teams to ensure timely delivery and utilizing technologies like AEM, frontend frameworks, AWS/Azure, and DevOps, while focusing on customer segmentation and personalization.

Duties and Responsibilities

1. Technology architecture and roadmap
• Create a robust architecture for the new web platform covering non-functional aspects including security, performance, scalability, and availability.
• Lead, define, maintain, and own the platform and solution architecture for the customer-facing asset within wider IT compliance.
• Ensure that the roadmap contains the new and yet-to-be-released features of the core base products, such as Adobe Experience Manager, Node.js, React, Solid.js, AWS, the DevOps pipeline, Adobe Target, Adobe and Google Analytics, New Relic, Akamai, and various other frameworks.
• Create a validation framework to measure and report the effectiveness of the architecture.
• Create a culture of industry benchmarking before releasing or adopting any new product/framework, and define a robust roadmap for its evolution with respect to the current and future needs of the One Web platform.
• Collaborate with IT teams, marketing teams, data teams, and partners across the organization to create a sustainable and achievable framework for the platform.
• Build a strong understanding of the backend infrastructure and systems while delivering a dynamic, personalized, customer-first integrated asset.
• Work collaboratively with various partners to define the security architecture of the platform, including video hosting, caching, and security features like DoS protection.
• Execute POCs to validate technology roadmaps, feasibility, and possibilities with scalable solutions that are versatile, interoperable, able to co-exist in the overall ecosystem, and cost-effective.
• Create a holistic, auto-scalable, and highly available environment across all key components, including Node servers, AEM servers, the DAM, and other critical components of the One Web asset.
• Leverage and sponsor innovation work, both through internal incubators and the company's external start-up network, to create, evaluate, and introduce novel technical capabilities into the platform.
• Foster a culture of innovation and engineering excellence across the enterprise: modern engineering practices, adoption of open source and open standards, and a culture of collaboration and efficiency.
• Ensure that digital assets continue to perform well throughout the year, including peak sales season, by suggesting robust technology frameworks, the right infrastructure, and correct data-flow processes.
• Analyze data such as drop-offs and bounce rate to constantly evaluate and improve process flows, and identify tooling ideas for process improvements that can attract the online customer.
• Partner with engineering teams across BFL to create an environment that provides an optimal infrastructure and developer experience, from the IDE and CI/CD through IaaS provisioning and cloud-native service onboarding frameworks.

2. Leadership and team development
• Add strategic value to processes through competition mapping and best-practice adoption.
• Scout the technology landscape to ensure adoption of emerging solutions and maintain an innovative edge.
• Participate in project presentations covering project priorities, timelines, quarterly plans, etc., to the vertical head for sign-off.
• Inspire and influence others to think differently, solve problems, and seize opportunities.
• Work with cross-functional teams to set and achieve targets for cross-selling.
• Determine individual training needs and development plans to build expertise and enhance skills.
• Set objectives, conduct reviews, and close appraisal processes for the team as per timelines.
• Ensure high employee engagement and morale through the right management interventions and a deeply emotionally intelligent approach.
• Establish performance expectations and regularly review the individual performance of the team.
• Identify and create development opportunities for team members to enhance technical knowledge.
• Work towards customer business outcomes, ensuring a strong connection between delivery activities and business objectives.

Key Decisions / Dimensions
• Recommendations on the existing AEM architecture to integrate Node.js and React as major architecture components, building an optimal solution that handles very high traffic with minimal infrastructure.
• Definition of the development workflow to reduce major gaps and bandwidth challenges.
• Onboarding and offloading of partner and internal resources based on POD requirements for deliverables.
• Internal and external training programs for freshers and byte employees to build their careers as per interest.
• A development build checklist for every deployment to maintain hygiene on production servers.
• API structure and integration approaches for building the mobile and web apps.
• Common content across both the app and web platforms to reduce repetitive tasks and steps.
• Product and technology evaluation to meet business use cases/requirements.
• Finance planning for the technology unit within the marketing department.
• All decisions directed towards quality delivery, to release quality products.

Major Challenges
• Defining an innovative architecture which integrates seamlessly with marketing product suites and tools.
• Building a data-driven architecture that utilizes user behavioral and transactional data to provide a preferred user experience for acquiring new users.
• Understanding new finance products and capabilities to build business-driven solutions in collaboration with data and marketing products.
• Continuously evolving/changing systems and technologies within minimum time to manage growing business volumes.
• Constant training of byte hires and new joiners for optimum results.

Required Qualifications and Experience

Qualifications
• B.Tech in Computer Science and Engineering.

Work Experience
• Minimum 9-12 years of experience in software development with a strong focus on web content management systems, particularly AEM, React, Solid.js, and Node.js, along with DevOps practices.
• Industry knowledge: knowledge of the finance industry and experience leading technical deliveries.
• Technical expertise: proficiency in Java/JEE, AEM, and associated technologies like OSGi, Sling, JCR, Apache, React, Solid.js, Node.js, and Akamai.
• Frontend skills: solid knowledge of HTML5, CSS3, JavaScript, and related frameworks (React, Solid.js). Experience with frontend technologies like Bootstrap, Backbone.js, React, Handlebars, Grunt, Angular, and jQuery.
• Cloud and DevOps: experience with cloud platforms (AWS, Azure) and DevOps tools (Jenkins, Maven). Strong knowledge of cloud-native approaches and platforms including AWS, Azure, or GCP. Experience with SaaS-based implementation of AEM as a Cloud Service and the AEM SDK (preferred).
• Leadership: strong leadership skills with the ability to manage and mentor development teams.
• Project management: lead and be involved in planning and estimation of Adobe projects; lead all tracks of the project across frontend, backend, QA, and project management.
• AEM expertise: strong hands-on experience in components, templates, taxonomy, metadata management, forward and reverse replication, workflows, content publishing and unpublishing, tagging, deployment (Maven), and content migration/planning.
• Infrastructure: strong physical architecture concepts (infrastructure), including load balancers (ELB), Apache setup, CDN, disaster recovery, and recommending capacity for AEM publish and author instances.
• Quality assurance: implemented quality processes for projects, such as continuous integration (Bamboo/Jenkins/Git/Bitbucket/Cloud Manager), SonarQube, code reviews (manual and automated), code formatters, and automation testing.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JOB ROLE: Python Developer

Primary skills: Strong in Python programming, PySpark queries, AWS, GIS
Secondary skill: Palantir

Responsibilities
• Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
• Collaborate with product and technology teams to design and validate the capabilities of the data platform.
• Identify, design, and implement process improvements: automating manual processes, optimizing for usability, and re-designing for greater scalability.
• Provide technical support and usage guidance to the users of our platform's services.
• Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

Qualifications
• Experience building and optimizing data pipelines in a distributed environment.
• Experience supporting and working with cross-functional teams.
• Proficiency working in a Linux environment.
• 4+ years of advanced working knowledge of SQL, Python, and PySpark (PySpark queries are a must).
• Knowledge of Palantir.
• Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
• Experience with platform monitoring and alerting tools.
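As a small illustration of the "metrics, monitoring, and alerting" work these Python roles describe, here is a hedged sketch of a threshold-based alert check. The service names, metric, and 5% threshold are all hypothetical; in production this rule would live in a monitoring stack rather than in application code.

```python
# Hypothetical alerting rule: flag services whose error rate exceeds a budget.
# Real deployments would evaluate this inside a monitoring/alerting tool;
# the sketch only shows the evaluation step.
ERROR_RATE_THRESHOLD = 0.05  # assumed 5% error budget

def check_alerts(metrics: dict, threshold: float = ERROR_RATE_THRESHOLD) -> list:
    """Return the names of services breaching the error-rate threshold."""
    return sorted(name for name, rate in metrics.items() if rate > threshold)

# Example reading: only 'transform' is over budget.
sample = {"ingest": 0.01, "transform": 0.09, "api": 0.02}
print(check_alerts(sample))  # ['transform']
```

The same shape generalizes to any metric: collect a name-to-value snapshot, compare against a budget, and emit the breaching names to the alert channel.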

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad (5 days work from office)
Role: Palantir Tech Lead
Skills: Python, PySpark, and Palantir

Tasks and Responsibilities:
• Lead data engineering activities on moderate to complex data- and analytics-centric problems which have broad impact and require in-depth analysis to obtain desired results; assemble, enhance, maintain, and optimize current data assets; enable cost savings and meet individual project or enterprise maturity objectives.
• Advanced working knowledge of SQL, Python, and PySpark.
• Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
• Experience with platform monitoring and alerting tools.
• Work closely with Subject Matter Experts (SMEs) to design and develop Foundry front-end applications, with the ontology (data model) and data pipelines supporting the applications.
• Implement data transformations to derive new datasets or create Foundry Ontology objects necessary for business applications.
• Implement operational applications using Foundry tools (Workshop, Map, and/or Slate).
• Actively participate in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.).
• Create and maintain documentation describing the data catalog and data objects.
• Maintain applications as usage grows and requirements change.
• Promote a continuous-improvement mindset by engaging in after-action reviews and sharing learnings.
• Use communication skills, especially for explaining technical concepts to non-technical business leaders.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We need someone very strong in Python programming, PySpark queries, AWS, GIS, and Palantir Foundry.

Skills: Python, PySpark, and Palantir

Tasks and Responsibilities:
• Lead data engineering activities on moderate to complex data- and analytics-centric problems which have broad impact and require in-depth analysis to obtain desired results; assemble, enhance, maintain, and optimize current data assets; enable cost savings and meet individual project or enterprise maturity objectives.
• Advanced working knowledge of SQL, Python, and PySpark.
• Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
• Experience with platform monitoring and alerting tools.
• Work closely with Subject Matter Experts (SMEs) to design and develop Foundry front-end applications, with the ontology (data model) and data pipelines supporting the applications.
• Implement data transformations to derive new datasets or create Foundry Ontology objects necessary for business applications.
• Implement operational applications using Foundry tools (Workshop, Map, and/or Slate).
• Actively participate in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.).
• Create and maintain documentation describing the data catalog and data objects.
• Maintain applications as usage grows and requirements change.

Posted 1 week ago

Apply

19.0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

On-site

Company Description
Webguru Infosystems is an ISO 9001:2015-certified global digital solutions provider with over 100 full-time developers. For more than 19 years, we have been delivering cost-effective, customized, and scalable web and mobile solutions to accelerate growth and drive meaningful differentiation for clients such as Zee Entertainment, Hindalco Industries, and HDFC. Specializing in web development, mobile apps, and digital marketing, we have successfully completed over 5,000 projects in 32+ countries. Our core technology stack includes Node.js, Angular, React, Laravel, and more.

Job Description
Develop software to a high standard using professional software engineering principles and practices. Test, troubleshoot, and optimise application components for maximum speed, security, stability, and scalability.

Required Candidate Profile:
• Sound knowledge of Laravel programming.
• Knowledge of at least one frontend JavaScript framework (Vue.js / React).
• Knowledge of at least one CMS (WordPress / OpenCart / Shopify / Magento).
• Database: MySQL / PostgreSQL.
• Comprehensive knowledge of code-hosting platforms for version control and collaboration (such as GitHub and Bitbucket).
• Experience in Node.js will be an added advantage.

Experience: A minimum of 4 years' experience is required.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.

Pay and Benefits:
• Competitive compensation, including base pay and annual incentive.
• Comprehensive health and life insurance and well-being benefits, based on location.
• Pension / retirement benefits.
• Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being.

DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The impact you will have in this role:
The SDET role specializes in developing capabilities and code to automate testing of the product or system under test. You will be part of a business-aligned development squad that specializes in building, enhancing, and maintaining custom software and solutions supporting the business.

Your Primary Responsibilities:
• Design, develop, and execute automated tests to ensure the quality of software products.
• As a squad member, work to understand software requirements; design, develop, and maintain automated tests that validate the functionality.
• As a squad member, work to identify and resolve defects and improve the overall software testing process.
• Implement and maintain continuous integration and continuous delivery (CI/CD) processes.
• Diagnose and remediate software defects.
• Stay current with new software testing methodologies, tools, and technologies.
• Understand and apply industry-specific best practices and standards.
• Establish expertise in the business areas, systems, and platforms being supported.
• Perform system integration testing, including automation, of newly developed or enhanced applications.
• Review requirements and design artifacts; develop unit, integration, and system test cases, ensuring extensive test coverage for our applications.
• Understand and articulate the business and its value-adds; play an active role in translating business and functional requirements into concrete results.
• Handle project coordination and technical management tasks.

NOTE: The primary responsibilities of this role are not limited to the details above.

Qualifications:
• Proven experience of 7-10 years with automated unit and integration testing.
• Bachelor's degree in computer science, software engineering, or a related field.

Talents Needed for Success:
• Minimum of 7+ years of related experience.
• Hands-on experience in API/XML testing and Selenium with Java.
• Extensive experience testing modern scripting-language-based components.
• Experience writing SQL queries.
• Experience with JIRA, Micro Focus ALM, Bitbucket, Git, and Jenkins.
• Experience with frontend and backend testing using Java and Selenium.
• Experienced in Agile/Waterfall and onsite/offshore work models and coordination.
• In-depth knowledge of the software implementation lifecycle (specifically the testing model, methodology, and processes).
• Detailed understanding of smoke testing, black-box and non-black-box testing, as well as regression testing.
• Ability to work well with both business clients and the technical team, both individually and as part of a team.
• Background in the financial domain preferred.

Actual salary is determined based on the role, location, individual experience, skills, and other considerations. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Primary skills: Strong in Python programming, PySpark queries, AWS, GIS, Palantir Foundry
Experience: 15+ years

Responsibilities
• Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
• Collaborate with product and technology teams to design and validate the capabilities of the data platform.
• Identify, design, and implement process improvements: automating manual processes, optimizing for usability, and re-designing for greater scalability.
• Provide technical support and usage guidance to the users of our platform's services.
• Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

Qualifications
• Experience building and optimizing data pipelines in a distributed environment.
• Experience supporting and working with cross-functional teams.
• Proficiency working in a Linux environment.
• 4+ years of advanced working knowledge of SQL, Python, and PySpark (PySpark queries are a must).
• Knowledge of Palantir.
• Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
• Experience with platform monitoring and alerting tools.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Sr. Java Spring Boot API Developer Location: Hyderabad, Telangana
Required Skills:
• 5+ years of software engineering experience developing integrated and secure enterprise or web-based applications using Java/J2EE.
• Experience building scalable distributed systems.
• Excellent understanding of multiple programming languages, frameworks, and databases.
• Experience building applications in Java, Spring Boot, Microservices, and REST APIs.
• Experience with Continuous Integration and Continuous Delivery (CI/CD) environments (Jenkins, Bitbucket) and their frameworks.
• Contributes to design, development, troubleshooting, debugging, evaluating, modifying, deploying, and documenting software and systems that meet the needs of customer-facing applications, business applications, and/or internal end-user applications.
• Understanding of writing, documenting, and building REST APIs.
• Familiar with writing complex SQL queries.
• Experience with collecting requirements, creating software designs, and developing efficient implementations.
• Familiarity with full-stack development.
• Functions as an active member of an agile team by contributing to software builds through consistent development practices (tools, common components, and documentation).
• Debugs basic software components and identifies code defects for remediation.
• Supports and monitors software across test, integration, and production environments.
• Explores new automation techniques to refine the agility, speed, and quality of engineering initiatives and efforts.
• Defines test conditions based on the requirements and specifications provided.
• Takes part in reviews of own work and handles work efficiently.
• Knowledge of cloud, and a passion for learning new technologies, industry trends, and deep technical curiosity.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities
• Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
• Collaborate with product and technology teams to design and validate the capabilities of the data platform.
• Identify, design, and implement process improvements: automating manual processes, optimizing for usability, re-designing for greater scalability.
• Provide technical support and usage guidance to the users of our platform's services.
• Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.
Qualifications
• Experience building and optimizing data pipelines in a distributed environment.
• Experience supporting and working with cross-functional teams.
• Proficiency working in a Linux environment.
• 4+ years of advanced working knowledge of SQL, Python, and PySpark queries (must).
• Knowledge of Palantir.
• Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
• Experience with platform monitoring and alerting tools.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.
About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations and HR Operations and more.
Core Competencies
• Excellent knowledge of EKS, Kubernetes, and its related AWS components.
• Kubernetes networking.
• Kubernetes DevOps, including deployment of a Kubernetes EKS cluster using IaC (Terraform) and CI/CD pipelines.
• EKS secret management, autoscaling, and lifecycle management.
• EKS security using AWS native services.
• Excellent understanding of AWS cloud services like VPC, EC2, ECS, S3, EBS, ELB, Elastic IPs, Security Groups, etc.
• AWS component deployment using Terraform.
• Application onboarding on Kubernetes using Argo CD.
• AWS CodePipeline, CodeBuild, CodeCommit.
• HashiCorp stack, HashiCorp Packer. 
• Bitbucket and Git; profound cloud technology, network, security, and platform expertise (AWS, Google Cloud, or Azure).
• Good documentation and communication skills.
• Good understanding of ELK, CloudWatch, Datadog.
Roles & Responsibilities
• Manage project-driven integration and day-to-day administration of cloud solutions.
• Develop prototypes, design and build modules and solutions for cloud platforms in iterative agile cycles; develop, maintain, and optimize the business outcome.
• Conduct peer reviews and maintain coding standards.
• Drive automation using CI/CD with Jenkins or Argo CD.
• Drive cloud solution automation and integration activity for the cloud provider (AWS) and tenant (project) workloads.
• Build and deploy AWS cloud infrastructure using CloudFormation and Terraform scripts.
• Use Ansible and Python to perform routine tasks like user management, security hardening, etc.
• Provide professional technical consultancy to migrate and transform existing on-premises applications to public cloud, and support all cloud-related programmes and existing environments.
• Design and deploy a Direct Connect network between AWS and the datacentre.
• Train and develop AWS expertise within the organisation.
• Proven troubleshooting skills to resolve issues related to cloud network, storage, and performance management.
VOIS Equal Opportunity Employer Commitment
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. 
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies that put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!
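The "AWS component deployment using Terraform" competency above boils down to declaring resources as code. A minimal sketch, using Terraform's JSON syntax rendered from Python for illustration; the resource name and CIDR block are invented, and a real setup would use `.tf` files managed in Git:

```python
import json

def vpc_config(name, cidr):
    """Render a minimal Terraform JSON document declaring one AWS VPC."""
    return {
        "resource": {
            "aws_vpc": {
                name: {
                    "cidr_block": cidr,
                    "tags": {"Name": name, "ManagedBy": "terraform"},
                }
            }
        }
    }

# Terraform accepts *.tf.json files with exactly this shape.
doc = json.dumps(vpc_config("platform_vpc", "10.20.0.0/16"), indent=2)
print(doc)
```

Saved as `main.tf.json`, this would be picked up by `terraform plan`/`apply` alongside ordinary HCL files.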

Posted 1 week ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role Summary: We are seeking an experienced Azure DevOps Architect to design and lead the end-to-end migration of existing CI/CD pipelines to Azure DevOps, while designing and implementing a scalable, reusable CI/CD framework for enterprise applications. The ideal candidate will have deep expertise in DevOps architecture, pipeline standardization, and automation-first approaches, with a strong understanding of enterprise governance and cloud-native deployment models. Bitbucket-to-GitHub migration experience will be an added advantage.
Key Responsibilities:
• Analyze existing CI/CD pipelines, tools, and environments.
• Define the Azure DevOps architecture, including organizations, projects, and access policies.
• Identify reusable CI/CD patterns across application teams.
• Architect a modular, parameterized, YAML-based CI/CD framework for multi-tech-stack support, integrating best practices, DevSecOps standards, and guardrails.
• Collaborate with stakeholders and lead the phased rollout of application onboarding onto the migrated CI/CD pipelines.
• Mentor and coach the team to achieve the project milestones.
Required Skills & Experience:
• 12+ years in software engineering/DevOps; 3+ years as a DevOps Architect.
• Experience migrating CI/CD pipelines for enterprise applications, preferably from TeamCity and Octopus Deploy to Azure DevOps.
• Hands-on experience with Azure DevOps (Repos, Pipelines, Artifacts, Environments).
• Experience with repository migration from Bitbucket to GitHub is an added advantage.
• Proficiency in writing reusable YAML pipeline templates.
• Strong scripting knowledge: Terraform, PowerShell, Bash, Python.
• Familiarity with App Service, AKS, VMs, and containerized deployments.
• Expertise in integrating tools like SonarQube, Snyk, Azure Artifacts, and HashiCorp Vault, and in integrating security tools into CI/CD pipelines.
• Excellent stakeholder communication and documentation skills.
Preferred 
Qualifications:
• Azure certifications (e.g., AZ-400, AZ-104, AZ-305)
• Knowledge of Agile/Scrum delivery models
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
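The "reusable, parameterized YAML pipeline templates" requirement above is about factoring one build definition so many teams can instantiate it. A rough sketch of the idea, rendered from Python purely for illustration; the stage, pool, and command names are invented, and a real Azure DevOps template would use the native `parameters:`/`template:` mechanism in a shared repository instead:

```python
from string import Template

# Skeleton of a reusable build stage; ${...} placeholders are filled in
# per application team when the template is instantiated.
STAGE_TEMPLATE = Template("""\
stages:
  - stage: Build_${app}
    jobs:
      - job: build
        pool: ${pool}
        steps:
          - script: ${build_cmd}
            displayName: Build ${app}
""")

def render_stage(app, pool="ubuntu-latest", build_cmd="make ci"):
    """Instantiate the shared stage template for one application."""
    return STAGE_TEMPLATE.substitute(app=app, pool=pool, build_cmd=build_cmd)

print(render_stage("payments"))
```

The design point is the same either way: the framework owns the stage structure and guardrails, while each team supplies only its parameters.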

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organization, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone. #VOIS
About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centers in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations and HR Operations and more.
Job Description
Roles and Responsibilities:
• Develop, deploy, and maintain Python-based applications and services on Google Cloud Platform.
• Build and maintain GCP-based infrastructure using tools like Google Cloud Functions, App Engine, and Cloud Run.
• Implement automation and CI/CD pipelines using tools such as Jenkins, GitLab, or Google Cloud Build.
• Collaborate with data engineers to process large datasets, utilizing GCP services such as BigQuery and Cloud Storage.
• Ensure best practices for performance, security, and scalability of cloud applications. 
• Write unit tests and documentation to ensure maintainability and quality of code.
• Troubleshoot and debug cloud-based applications and services in a production environment.
• Strong proficiency in Python programming.
• Strong experience with CI/CD (Jenkins), Git/SVN/Bitbucket, and JIRA.
• Experience with cloud-native development.
• Knowledge of containerization technologies (Docker, Kubernetes).
• Familiarity with RESTful API design and implementation.
• Understanding of database systems (SQL and NoSQL) and data processing frameworks.
• Strong problem-solving, debugging, and analytical skills.
• Familiarity with agile development methodologies (e.g. SCRUM, SAFe).
• Familiar with data warehousing concepts.
Core Competencies, Knowledge And Experience
Python programming; SQL (Oracle, Postgres or BigQuery); experience with SOX-compliant software testing; knowledge of cloud technologies; strong problem-solving, debugging, and analytical skills; Docker.
Must Have Technical/Professional Qualifications
Python programming; SQL (Oracle, Postgres or BigQuery); experience with SOX-compliant software testing; knowledge of cloud technologies; strong problem-solving, debugging, and analytical skills; Docker.
Not a perfect fit? Worried that you don't meet all the desired criteria exactly? At Vodafone we are passionate about empowering people and creating a workplace where everyone can thrive, whatever their personal or professional background. If you're excited about this role but your experience doesn't align exactly with every part of the job description, we encourage you to still apply, as you may be the right candidate for this role or another opportunity.
What's In It For You
Who we are
We are a leading international telco, serving millions of customers. At Vodafone, we believe that connectivity is a force for good. If we use it for the things that really matter, it can improve people's lives and the world around us. 
Through our technology we empower people, connecting everyone regardless of who they are or where they live, and we protect the planet, whilst helping our customers do the same. Belonging at Vodafone isn't a concept; it's lived, breathed, and cultivated through everything we do. You'll be part of a global and diverse community, with many different minds, abilities, backgrounds and cultures. We're committed to increasing diversity, ensuring equal representation, and making Vodafone a place everyone feels safe, valued and included. If you require any reasonable adjustments or have an accessibility request as part of your recruitment journey, for example, extended time or breaks in between online assessments, please refer to https://careers.vodafone.com/application-adjustments/ for guidance. Together we can.

Posted 1 week ago

Apply

4.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: GitHub Migration Expert Experience: 4 to 10 Years Location: India Employment Type: Full-Time Job Summary: We are seeking a skilled and experienced GitHub Migration Expert to lead and execute version control system migrations (from Bitbucket, GitLab, Azure Repos, SVN, etc.) to GitHub Enterprise Cloud or GitHub Enterprise Server. The ideal candidate will have deep expertise in Git, DevOps, CI/CD, and repository structuring, along with strong communication skills to work with cross-functional teams. Key Responsibilities: • Lead end-to-end source code migration projects from platforms like Bitbucket, GitLab, Azure DevOps, or SVN to GitHub. • Design and implement GitHub repository structure, access controls, and branching strategies. • Develop and maintain migration scripts, automation tools, and documentation. • Collaborate with security, DevOps, and engineering teams to ensure smooth migration with minimal disruption. • Train and support development teams on GitHub best practices and workflows. • Work on integrating GitHub with CI/CD pipelines (Jenkins, Azure DevOps, GitHub Actions, etc.). • Troubleshoot and resolve any migration-related issues. Required Skills and Experience: • 4–10 years of experience in DevOps, Git administration, or source code management. • Proven experience in migrating large-scale repositories to GitHub (Cloud or Enterprise). • Strong understanding of Git, version control systems, and repository structuring. • Experience with GitHub Actions, GitHub Advanced Security, and GitHub APIs. • Scripting knowledge (Bash, Python, PowerShell) for automation tasks. • Knowledge of CI/CD tools and integration with GitHub. • Experience with user access control, role management, and SSO integration. • Excellent problem-solving and communication skills.
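The "migration scripts" responsibility above typically centers on Git's mirror workflow: clone every branch, tag, and ref from the source, then push the lot to GitHub. A dry-run sketch that only builds the commands without executing them; the org, project, and repository names are invented, and a real script would run these via `subprocess` and handle authentication, rate limits, and errors:

```python
def mirror_migration_plan(repo, src_base, dst_base):
    """Return, in order, the git commands that mirror one repository."""
    return [
        # --mirror copies all refs (branches, tags, notes), not just HEAD.
        f"git clone --mirror {src_base}/{repo}.git",
        f"git -C {repo}.git push --mirror {dst_base}/{repo}.git",
    ]

plan = mirror_migration_plan(
    "billing-service",
    "https://bitbucket.example.com/scm/proj",
    "https://github.com/example-org",
)
for cmd in plan:
    print(cmd)
```

Looping this over the source platform's repository list (via its REST API) is the usual skeleton of a bulk migration; history-rewriting cases (SVN, oversized blobs) need extra tooling on top.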

Posted 1 week ago

Apply

1.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Us
Godrej Enterprises Group (comprising Godrej & Boyce and its subsidiaries) has a significant presence across diverse consumer and industrial businesses spanning Aerospace, Aviation, Defence, Engines and Motors, Energy, Locks & Security Solutions, Building Materials, Green Building Consulting, Construction and EPC Services, Heavy Engineering, Intralogistics, Tooling, Healthcare Equipment, Consumer Durables, Furniture, Interior Design, Architectural Fittings, IT Solutions and Vending Machines.
KRA
• Contribute to the data, AI, and Gen AI platform and product aspects through the ability to perform data science analysis and build AI systems.
• Research, select/develop, deliver, and deploy models for ML/AI use cases.
• Contribute towards defining the AI frameworks required to ensure quality and compliance of AI models and systems.
• Support departmental and organizational initiatives related to analytics and AI.
Job Role
The Data Scientist will be required to build hypotheses, research, prototype, design, develop, and help implement enterprise-level ML/AI/Gen AI models for projects that transform and improve the company's business results and competitive position, ensuring alignment with the overall digital, data, and AI strategy and with current and future business objectives.
Job Description
Contribution to Data & AI Platform and Products
• Work within the Digital Data AI team on the development of data science and AI capabilities and features to support delivery on defined platform objectives.
• Design, implement, and evolve robust, secure, quality solutions that operate for the business ecosystem.
• Research, design, and develop high-quality data, AI, and analytical systems.
• Define exploratory data analysis (EDA) in keeping with the needs of the problem.
• Ensure the development of quality procedures and standards for products, and supervise tests. 
• Work on data-correction problems such as data cleansing, and sourcing and integrating from multiple platforms, to make good data available for data analysis, data science, and AI development.
Provide and Help Deliver the Solutions
• Contribute hands-on to solutions and POCs, working in cross-functional or agile teams to develop and deliver significant aspects of the models and systems.
• Lead and mentor junior data scientists, and ensure availability of necessary data and analysis as needed by business requirements and use cases.
• Conduct diagnostics across existing data and make future-state recommendations at regular intervals.
• Collaborate with business analysts, data scientists, data engineers, and data analysts to ensure understanding and alignment between business needs and technical implementation.
• Monitor performance of existing solutions across use cases to identify and drive optimization.
• Oversee, research, develop, and analyse NLP, Gen AI, and computer vision algorithms across various use cases.
• Ensure model robustness, generalization, accuracy, testability, and efficiency.
• Write product or system development code.
• Contribute via understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, etc.
• Responsible for deploying AI/ML models (MLOps) in standalone and cloud-based systems and services.
Support Projects and Initiatives
• Help collaborate with business unit heads and corporate functions to identify and assist in providing data analysis support across different projects.
• Identify data from legacy systems to build new solutions based on requirements.
• Provide necessary technical support in new and ongoing digital initiatives to ensure seamless data solutioning.
Requisite Qualification
Essential: B.Tech. in Computer Science/IT/Data Science.
Preferred: PhD, or minimum M.Tech in Computer Science/IT/Data Science with pursuit of a PhD; a good data science or data engineering certification (minimum 1-year programs, etc.).
Requisite Experience
Essential: Minimum 8 years of experience in data analytics and data science, building and maintaining various ML models. Good know-how of emerging small and large models. 3 to 5+ years of experience in each data science specialization, such as NLP, demand forecasting, and MLOps. Should be able to present a portfolio of data science work or use cases.
Preferred: Sound understanding of data analysis to support the preparatory work. Experience working in an agile environment. Familiarity with all aspects of MLOps (source control, continuous integration, deployments, etc.). Experience/exposure with cloud data services like AWS or Azure.
Special Skills Required
Functional:
• Excellent understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, etc.
• Strong hands-on coding skills in Python, processing large-scale datasets and developing machine learning models.
• Experience programming in Python, R, and SQL.
• Expertise in developing ML models and deploying them.
• Hands-on experience developing NLP models using transformers, and computer vision.
• Know-how of deploying AI/ML models (MLOps) in standalone and cloud-based systems and services. 
• Comfortable working with DevOps tooling: Jenkins, Bitbucket, CI/CD.
• SQL Server experience required.
• Understanding of dimensional data modelling, structured query language (SQL) skills, and data warehouse and reporting techniques.
• Data governance and ethics.
Leadership:
• Strong analytical skills; ability to ask the right questions, analyse data, and draw conclusions by making appropriate assumptions, to solve and model complex business requirements.
• Ability to lead a team of junior data scientists, get into the details of the problem, and code the solution hands-on as and when needed.
Planning & Organizing:
• Present complex data analysis in a consumable way and engage the stakeholders.
• Ability to collaborate with different teams and clearly communicate solutions to both technical and non-technical team members.
• Team player.
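Of the techniques the role lists, anomaly detection has the simplest self-contained illustration: flag points that sit too many standard deviations from the mean. A minimal sketch with invented sensor readings; production systems would use more robust methods (e.g. isolation forests or median-based statistics) on real data:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag values whose z-score magnitude exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Invented readings: six stable points and one obvious outlier.
readings = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 42.0]
print(zscore_anomalies(readings, threshold=2.0))  # [42.0]
```

The same interface scales up naturally: swap the z-score body for a fitted model while callers keep passing raw values and receiving the flagged subset.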

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Sr. Java Developer
Job Description
• 7-10 years of relevant experience in the financial services industry.
• Knowledge and development experience in backend technologies: Java, Spring Boot, Kafka, microservice architecture, multithreading.
• Knowledge and development experience in database technologies: Oracle, MongoDB.
• Knowledge and development experience in frontend technologies: Angular, TypeScript/JavaScript, HTML, CSS3/SASS.
• Experience in containerization technologies: OpenShift, Kubernetes, Docker, AWS, etc.
• Familiarity with DevOps concepts, tools, and continuous delivery pipelines: Git, Bitbucket, Jenkins, uDeploy, Tekton, Harness, Jira, etc.
• Intermediate-level experience in an applications development role.
• Consistently demonstrates clear and concise written and verbal communication.
• Demonstrated problem-solving and decision-making skills.
• Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from TCS! TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together. Your role is of key importance, as it lays down the foundation for the entire project. Make sure you have a valid EP number before the interview. To create an EP number, please visit https://ibegin.tcs.com/iBegin/register and kindly complete the registration if you have not done so yet.
Position: Release Manager - Copado Specialist
Experience: 6+ years
Location: Pan India
Job Description: We are seeking an experienced, proactive, hands-on Release Manager with Copado expertise to lead and oversee our Salesforce release management lifecycle. The ideal candidate will be responsible for managing deployments across various environments using Copado, ensuring code quality and compliance, and leading a high-performing release team, along with the ability to identify and fix complex Copado-related deployment issues. This role requires a deep understanding of Salesforce DevOps practices, excellent problem-solving skills, and proven experience in managing release cycles efficiently.
Key Responsibilities:
• Lead end-to-end release management for Salesforce applications using Copado.
• Own and drive the release calendar, coordinating across multiple teams to ensure smooth deployments.
• Define and enforce deployment and rollback strategies, CI/CD pipelines, and governance models.
• Collaborate closely with developers, QA, product owners, and stakeholders to ensure alignment and transparency.
• Manage and mentor a team of release engineers and ensure timely, quality deployments.
• Troubleshoot and resolve deployment issues and conflicts.
• Maintain release documentation and ensure audit readiness.
• Continuously improve release processes through automation, process refinement, and best practices. 
Required Skills & Experience:
• 6+ years of experience in IT release management, with at least 4+ years of hands-on Copado experience.
• Deep knowledge of the Salesforce development lifecycle, metadata types, and environment strategies.
• Proven ability to manage complex release schedules and environments.
• Experience with version control systems (e.g., Git), CI/CD tools, and Agile methodologies.
• Strong leadership and team management skills.
• Excellent communication, coordination, and stakeholder management abilities.
• Copado certifications (e.g., Copado Fundamentals I/II) are highly preferred.
• Salesforce certifications (Administrator, Platform Developer, etc.) are a plus.
Preferred Qualifications:
• Experience with other DevOps tools like Jenkins, Gearset, AutoRABIT, or Bitbucket is a plus.
• Familiarity with compliance and audit frameworks (e.g., SOX, GDPR) in a Salesforce environment.
TCS Eligibility Criteria: BE/B.Tech/MCA/M.Sc./MS with a minimum of 6 years of relevant IT experience post-qualification. Only full-time courses will be considered. Candidates who have attended a TCS interview in the last month need not apply. Referrals are always welcome!
Regards, Sangeethraj Hopper

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.
Job Title: Architect (4+ years of architect experience is mandatory) - AWS Lambda, API Gateway, S3, React, MongoDB, ExpressJS, NodeJS
Location: PAN India
Work Mode: Hybrid
Experience: 4+ years
Job Type: Contract to hire (C2H)
Notice Period: Immediate joiners
Experience Requirements: Bachelor's degree in Computer Science Engineering with overall 12-15 years' experience in the IT industry.
• 4+ years of experience in designing and architecting public/private cloud using AWS services: Lambda, API Gateway, S3, CloudFront, CloudWatch.
• 4+ years of experience in designing end-to-end software architecture across the MERN stack (MongoDB, Express, React, Node.js).
• Act as the single point of contact for any technical delivery related to a project.
• Provide technical thought leadership to the team on solution decisions, technical issues, and internal organization initiatives.
• Enforce best coding practices and standards through code reviews, documentation, and mentoring.
• Work with product managers, UX teams, and QA engineers to deliver cohesive, business-aligned solutions.
• Exposure to implementation of CI/CD pipelines and automation frameworks in alignment with DevOps practices.
• Working knowledge of Git, Git Flow, Jira, Bitbucket.
• Experience working in Agile development.
• Optional: OKTA-based integration.
• Working knowledge of automated unit and integration testing in JavaScript.
• Experience using REST principles in building APIs.
• Experience with API design using OpenAPI/Swagger specifications.
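The Lambda-plus-API-Gateway pattern named above pairs a stateless handler with API Gateway's proxy integration, which delivers the HTTP request as an event dict and expects a status/headers/body dict back. A minimal sketch; the route semantics and field names are invented, and a real service would be in Node.js for this MERN-oriented role:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration."""
    # queryStringParameters is None when the request has no query string.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event; on AWS, API Gateway supplies this.
resp = handler({"queryStringParameters": {"name": "architect"}}, None)
print(resp["statusCode"], resp["body"])
```

Because the handler is a pure function of the event, it can be unit-tested locally exactly like this, with no AWS dependency.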

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Locations: Bangalore
Experience: 6 to 12 years
Key Requirements:
· 6+ years of experience; strong in designing and developing web applications from scratch, and able to build a strong foundation using React JS.
· Experience with React JS and Redux with ES6 JavaScript, HTML5, and CSS3 required.
· Strong knowledge of web architecture/design and web best practices required.
· Evaluating and understanding business and functional requirements.
· Proficient with tools such as Git, Bitbucket, JIRA, and Confluence.
· Strong communication skills; a team player with the necessary problem-solving skills.
· Self-starter with the ability to lead a small UI team.
· Experience working in an Agile/Scrum environment.
Primary Skills (Must Have)
· React
· TypeScript
· ES6
· Jest
· HTML/CSS3

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
