5.0 - 8.0 years
11 - 15 Lacs
Gurugram
Work from Office
Key Responsibilities:
- Design, develop, and maintain scalable and secure backend applications using Java and Spring Boot 3.2
- Develop RESTful APIs and integrate with third-party services and internal systems
- Work on Spring Batch for handling scheduled or high-volume background jobs
- Design and develop microservices and ensure inter-service communication and data consistency
- Build and maintain an intuitive portal/dashboard for internal/external stakeholders with appropriate backend logic
- Optimize queries and work closely with the Oracle DB, writing complex SQL queries and performing performance tuning
- Collaborate with front-end developers, QA engineers, and product managers to deliver high-quality solutions
- Conduct code reviews and provide guidance to junior developers
- Troubleshoot and debug application issues, perform root cause analysis, and implement effective solutions
- Write clear, maintainable, and testable code with appropriate unit and integration tests
- Take ownership of features from design to deployment and support
- Participate in Agile ceremonies and contribute to sprint planning and retrospectives

Required Skills and Experience:
- 5-7 years of proven experience as a Java backend developer
- Strong programming skills in Java with a deep understanding of object-oriented programming
- Extensive experience with Spring Boot 3.2, Spring Batch, and Spring job scheduling
- Proficiency in developing and consuming RESTful APIs
- Hands-on experience with microservices architecture and distributed systems
- Solid experience working with Oracle Database and writing optimized SQL queries
- Experience integrating backend services with front-end portals or dashboards
- Strong understanding of software engineering best practices, including coding standards, code reviews, source control management, build processes, testing, and operations
- Excellent analytical and problem-solving skills; must be able to analyze complex business requirements and build logical solutions
- Familiarity with tools like Git, Maven/Gradle, Jenkins, and containerization platforms (Docker/Kubernetes) is a plus
- Good communication and collaboration skills to work effectively in a team environment

Nice to Have:
- Experience in performance tuning and application profiling
- Exposure to CI/CD pipelines and DevOps practices
- Basic knowledge of front-end technologies for better integration with the backend

Educational Qualification:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
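The Spring Batch responsibility above boils down to chunk-oriented processing: read items one at a time, transform them, and write them in fixed-size chunks, one transaction per chunk. A minimal, language-agnostic sketch of that loop (shown in Python for brevity; Spring Batch's real API is Java, and the reader/processor/writer callables here are hypothetical):

```python
def run_chunk_step(read, process, write, chunk_size=3):
    """Chunk-oriented step: read until the source is exhausted,
    processing and writing `chunk_size` items at a time."""
    chunk = []
    while True:
        item = read()  # returns None when the source is exhausted
        if item is None:
            break
        processed = process(item)
        if processed is not None:  # a None result filters the item out
            chunk.append(processed)
        if len(chunk) >= chunk_size:
            write(chunk)  # one "transaction" per chunk
            chunk = []
    if chunk:
        write(chunk)  # flush the final partial chunk

# Example: square even numbers from a finite source, writing in chunks of 3.
source = iter(range(10))
written = []
run_chunk_step(
    read=lambda: next(source, None),
    process=lambda x: x * x if x % 2 == 0 else None,
    write=written.append,
)
print(written)  # [[0, 4, 16], [36, 64]]
```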
Posted 4 weeks ago
5.0 - 10.0 years
7 - 17 Lacs
Gurugram
Work from Office
Job Title: Kafka Integration Specialist

Job Description: We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems.

Key Responsibilities:
- Design, implement, and maintain Kafka-based data pipelines.
- Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies.
- Manage Kafka clusters, ensuring high availability, scalability, and performance.
- Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions.
- Implement best practices for data streaming, including message serialization, partitioning, and replication.
- Monitor and troubleshoot Kafka performance, latency, and security issues.
- Ensure data integrity and implement failover strategies for critical data pipelines.

Required Skills:
- Strong experience in Apache Kafka (Kafka Streams, Kafka Connect).
- Proficiency in programming languages like Java, Python, or Scala.
- Experience with distributed systems and data streaming concepts.
- Familiarity with ZooKeeper, Confluent Kafka, and Kafka broker configurations.
- Expertise in creating and managing topics, partitions, and consumer groups.
- Hands-on experience with integration tools such as REST APIs, MQ, or ESB.
- Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment.

Nice to Have:
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation.
- Knowledge of data serialization formats like Avro, Protobuf, or JSON.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- 4+ years of hands-on experience in Kafka integration projects.
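One of the best practices listed above, partitioning, hinges on how a producer maps a record key to a partition: Kafka's default partitioner hashes the key and takes it modulo the partition count, so all records with the same key land on the same partition and keep their relative order. A rough sketch of the idea (Kafka actually uses a murmur2 hash; the MD5 stand-in here is purely illustrative):

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Key-based partition assignment, as Kafka's default partitioner does.
    Kafka uses murmur2; MD5 here is just an illustrative stand-in."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key -> same partition, so per-key ordering is preserved.
p1 = partition_for(b"sensor-42", 12)
p2 = partition_for(b"sensor-42", 12)
assert p1 == p2
print(p1 in range(12))  # True
```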
Posted 4 weeks ago
3.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As the Head of Engineering for our AI-powered Autonomous Cloud Portal, you will play a crucial role in leading multiple engineering teams through the full software development life cycle. Your responsibilities will include fostering an innovative engineering culture, defining and executing technical strategies, collaborating with various teams, establishing agile practices, and ensuring compliance with security and regulatory standards. In this leadership position, you will be responsible for building, mentoring, and overseeing cross-functional teams encompassing backend, frontend, AI/ML, QA, and DevOps. Your focus will be on promoting innovation, ownership, quality, and continuous improvement within the engineering organization. Additionally, you will drive the recruitment, training, and performance management processes for engineering personnel. Your expertise in cloud-native systems, AI/ML integration, and DevOps practices will be essential for defining and executing the engineering roadmap aligned with business objectives. You will be tasked with upholding best practices in architecture, security, scalability, testing, and continuous integration/continuous deployment (CI/CD). Regularly reviewing technical designs and code to ensure the feasibility and high standards of AI-driven features will also be a key aspect of your role. Collaboration with Product Management, Solution Architects, and Cloud Architects will be imperative to deliver seamless features. You will be responsible for ensuring effective coordination between teams for planning, sprints, releases, and documentation, as well as managing stakeholder communications related to technical deliverables and team velocity. Establishing Agile practices, conducting sprint reviews and retrospectives, and implementing engineering key performance indicators (KPIs), quality assurance (QA) standards, and incident handling processes will fall under your purview. 
Furthermore, you will oversee compliance with security and regulatory requirements in cloud environments. To qualify for this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 10 years of engineering experience, including a minimum of 3 years in leadership positions. Your technical expertise should include a strong background in Python, FastAPI/Django, and DevOps pipelines, as well as familiarity with system design, microservices, AI integration, and distributed systems. Experience in building AI/ML-enabled enterprise platforms, knowledge of cloud platforms such as AWS, Azure, or GCP, familiarity with compliance standards like ISO, NIST, SOC2, and knowledge of observability platforms like Grafana, Prometheus, and ELK would be advantageous. Join us to lead the development of an AI cloud automation platform, build high-performing engineering teams, and make a significant impact on architecture, product delivery, and AI integration within a cutting-edge technology environment.
Posted 4 weeks ago
5.0 - 9.0 years
0 - 0 Lacs
Karnataka
On-site
As a key member of our team, you will have a significant impact on the technical direction and infrastructure of our platform. Collaborating with engineers, designers, and founders, you will play a hands-on role in shaping our product and company values. This is a unique opportunity to make a lasting impression in a dynamic early-stage environment where your contributions can truly define our future.

Responsibilities:
- Design and implement the infrastructure in close collaboration with the team.
- Work with engineers and designers to create a user-friendly platform that meets customer needs.
- Define and execute the technical roadmap, balancing innovation and technical debt to achieve business goals.
- Develop and maintain development processes, tools, and workflows.
- Contribute to the product roadmap.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
- 5+ years of experience as a DevOps engineer, specializing in cloud-native infrastructures.
- Proficiency in AWS, GCP, and Azure, with knowledge of cost and billing.
- Familiarity with CloudFormation, Terraform, Kubernetes, Docker, and distributed systems.
- AWS, GCP, or Azure certifications are a plus.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Experience in creating prototypes and rapid experimentation.

Benefits:
- Competitive salary within local market standards.
- Performance-based bonuses.
- Company equity options.
- Comprehensive healthcare coverage.
- Unlimited paid time off (subject to manager approval).
- Quarterly team retreats and offsites.
- Flexible work arrangements, including hybrid/remote options.

Compensation: 2,200,000 INR - 3,500,000 INR
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a full stack developer at our company, you will be part of a dynamic team working on building a workflow automation system to streamline manual processes. Your role will involve working on multiple web applications, delivering features within agreed timelines, and ensuring high-quality deliveries. You will establish best practices for software development, mentor junior engineers, and collaborate with non-technical stakeholders to align engineering with business requirements. Additionally, you will guide developers in adopting the system and improving best practices.

You must have a minimum of 8 years of programming experience with a focus on JavaScript, including a track record of managing diverse deliveries and experience in design systems at scale. Your strong design and architectural skills will be crucial in developing highly scalable web applications that cater to millions of users. You should be adept at building adaptive UI components for web and mobile platforms, driving technical direction for design systems, and understanding the full software development lifecycle.

Excellent communication skills are essential for this role, along with experience in startups, product-based companies, or hyper-growth environments. Your expertise in ReactJS, RESTful APIs, microservices, and distributed systems will be valuable, as well as your knowledge of TypeScript, Redux, Redux-Saga, Jest, HTML, and responsive design. Familiarity with Scrum Agile methodology, Node.js, Backend for Frontend (BFF), config-driven design, and micro-frontends is preferred. Experience in native mobile development, Figma, Storybook, and documentation tools would also be advantageous.

You should hold a BE/B.Tech or equivalent degree in Computer Science or a related field. If you are passionate about enhancing developer experience and creating scalable, reusable UI components, we encourage you to join our team and contribute to fulfilling your career aspirations.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Telangana
On-site
The ideal candidate should have expertise in back-end programming languages like JavaScript and TypeScript, with a special focus on Node.js. You should be proficient in writing HTML, CSS, and JavaScript. Experience working with RDBMS systems like Postgres or MySQL is required. You should have a strong background in implementing testing platforms and unit tests, as well as a good understanding of Git. An appreciation for clean and well-documented code is essential. Understanding of microservices and distributed systems is a plus. Experience in deploying applications using any cloud infrastructure is desirable. Familiarity with building pipelines using CI/CD tools such as Jenkins is a bonus. The candidate should have 2 to 5 years of relevant experience.

If you meet these requirements and are interested in the position, please send your resume to hr@rivan.in. We offer flexible working hours, allowing you to work at any time, as we focus on the quality of work rather than the number of login hours. Additionally, upon successful completion of the internship period, you will receive an Internship Certificate from the company. We are also happy to provide a Letter of Recommendation based on your performance during your time with us.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy. We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software. Key Responsibilities: - Design, develop, and maintain scalable, API-driven backend services using Kotlin. - Align backend systems with modern data modeling and orchestration standards. - Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform. - Implement and refine RESTful APIs following established design guidelines. - Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability. - Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts. - Drive software development best practices including code reviews, documentation, and CI/CD process adherence. - Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform). 
Qualifications:
- 3+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or related discipline.

Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.

About the Role and Key Responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion.
- Work with distributed open-source software such as Kubernetes, Kafka, Spark, and similar to build scalable and performant solutions.
- Help shape the culture and methodology of a rapidly growing company.
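As a rough illustration of the entity-matching idea behind the contextualization platform described above (sketched in Python rather than the posting's Kotlin, and far simpler than any real matcher): associate records from two heterogeneous sources by a normalized identifier. The field names and normalization rule are invented for the example.

```python
import re

def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip non-alphanumerics."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def match_entities(timeseries, assets):
    """Associate each time series with an asset whose normalized
    name matches its tag. Returns {timeseries_id: asset_id}."""
    index = {normalize(a["name"]): a["id"] for a in assets}
    return {
        ts["id"]: index[normalize(ts["tag"])]
        for ts in timeseries
        if normalize(ts["tag"]) in index
    }

assets = [{"id": 1, "name": "Pump-101"}, {"id": 2, "name": "Valve 7"}]
series = [{"id": "ts-a", "tag": "PUMP 101"}, {"id": "ts-b", "tag": "compressor"}]
print(match_entities(series, assets))  # {'ts-a': 1}
```

Real contextualization services layer fuzzy scoring and ML-based candidates on top of exact-key joins like this one.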
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. Our current engineering focus is on modernizing the architecture for better scalability and orchestration compatibility, refactoring core services, and laying the foundation for future AI-based enhancements. This pivotal development initiative aligns directly with a multi-year digital transformation strategy and has clear roadmap milestones. We are searching for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join our newly established scrum team responsible for enhancing a core data contextualization platform. This service is crucial in associating and matching data from diverse sources such as time series, equipment, documents, and 3D objects into a unified data model. As a Senior Backend Engineer, you will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This role is high-impact, contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software. Your key responsibilities will include designing, developing, and maintaining scalable, API-driven backend services using Kotlin, aligning backend systems with modern data modeling and orchestration standards, collaborating with engineering, product, and design teams for seamless integration, implementing and refining RESTful APIs, participating in architecture planning, technical discovery, and integration design, conducting load testing, improving unit test coverage, driving software development best practices, and ensuring compliance with multi-cloud design standards. 
To qualify for this role, you should have at least 5 years of backend development experience with a strong focus on Kotlin, the ability to design and maintain robust, API-centric microservices, hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows, solid knowledge of PostgreSQL, Elasticsearch, and object storage systems, a strong understanding of distributed systems, data modeling, and software scalability principles, excellent communication skills, and a degree in Computer Science or a related discipline. Bonus qualifications include experience with Python, knowledge of data contextualization or entity resolution techniques, familiarity with 3D data models, industrial data structures, or hierarchical asset relationships, exposure to LLM-based matching or AI-enhanced data processing, and experience with Terraform, Prometheus, and scalable backend performance testing. In this role, you will develop Data Fusion, a robust SaaS for industrial data, and work on solving concrete industrial data problems by designing and implementing APIs and services on top of Data Fusion. You will collaborate with application teams to ensure a delightful user experience and work with open-source software like Kubernetes, Kafka, Spark, databases such as PostgreSQL and Elasticsearch, and storage systems like S3-API-compatible blob stores. At GlobalLogic, we offer a culture of caring, learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust organization where integrity is key. Join us as we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
The role of Software Engineer II is based in Bangalore, requires 5-7 years of experience, and is open to immediate joiners. You will work in a fast-growing engineering team to develop scalable, secure, and cloud-native backend systems.

Key responsibilities include designing and developing backend services using Python (Flask/FastAPI), building scalable systems with robust algorithms and data structures, participating in architecture and design discussions, reviewing code for quality, performance, and security, troubleshooting and resolving complex system issues, working with SQL and NoSQL databases (MySQL, MongoDB), creating cloud-native apps on Microsoft Azure, utilizing Docker and Kubernetes for containerization and orchestration, documenting systems and sharing knowledge internally, and optionally integrating Generative AI models and pipelines while advocating for best practices in DevOps, testing, and system design.
The required skills for this role include strong Python proficiency with Flask or FastAPI; a solid understanding of data structures, algorithms, and object-oriented programming (OOP); experience with distributed systems and RESTful APIs; knowledge of relational databases such as MySQL or SQL Server and NoSQL databases such as MongoDB; hands-on experience with Microsoft Azure cloud services (compute, storage, networking, monitoring); proficiency in Docker and Kubernetes for containerization and orchestration; strong debugging and performance-tuning abilities; familiarity with microservices and system architecture; and strong communication and collaboration skills.

Preferred skills include experience with GenAI integration, familiarity with CI/CD and Infrastructure-as-Code (Terraform, Azure DevOps), knowledge of observability tools (Prometheus, Grafana, ELK), and a background in SaaS or high-scale backend systems.

Join us in this exciting opportunity to contribute to building cutting-edge backend systems and leveraging innovative technologies in a collaborative and dynamic environment.

Regards,
Daina
Infosys BPM Recruitment Team
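At their core, Flask and FastAPI both map a (method, path) pair to a handler function. A toy dispatcher in plain Python illustrating that routing pattern (this is neither framework's actual API, just the concept behind their decorators):

```python
routes = {}

def route(method, path):
    """Register a handler for (method, path) -- the pattern that
    Flask's @app.route and FastAPI's @app.get provide."""
    def decorator(fn):
        routes[(method, path)] = fn
        return fn
    return decorator

@route("GET", "/health")
def health():
    return {"status": "ok"}

@route("GET", "/users/42")
def get_user():
    return {"id": 42, "name": "example"}

def dispatch(method, path):
    """Look up and invoke the handler, or return a 404-style error."""
    handler = routes.get((method, path))
    return handler() if handler else {"error": 404}

print(dispatch("GET", "/health"))   # {'status': 'ok'}
print(dispatch("GET", "/missing"))  # {'error': 404}
```

Real frameworks add path parameters, request parsing, and validation on top of this lookup.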
Posted 1 month ago
0.0 - 4.0 years
0 Lacs
Pune, Maharashtra
On-site
As a software engineer at Google, you will have the opportunity to work on cutting-edge technologies that impact how billions of users connect, explore, and engage with information globally. The products you work on will be required to handle data at a massive scale, going beyond traditional web search. We are seeking individuals who can bring innovative ideas from diverse backgrounds such as information retrieval, distributed computing, system design, networking, security, artificial intelligence, and more. You will be involved in projects critical to Google's needs, with the flexibility to switch teams and projects as both you and the fast-paced business evolve. Versatility, leadership, and a passion for tackling new challenges across the full technology stack are essential qualities we look for in our engineers as we strive to advance technology continually. As a pivotal member of a dynamic team, your responsibilities will include designing, testing, deploying, and maintaining software solutions. Google values engineers with a wide range of technical skills who are eager to address some of the most significant technological challenges and make a meaningful impact on users worldwide. Our engineers work not only on search enhancements but also on scalability solutions, large-scale applications, and innovative platforms for developers across diverse Google products. Your role will involve researching, conceptualizing, and developing software applications to enhance and expand Google's product portfolio. You will contribute to various projects that leverage technologies like natural language processing, artificial intelligence, data compression, machine learning, and search technologies. Collaboration on scalability issues related to data access and information retrieval will be a key part of your responsibilities, along with solving complex challenges presented to you. 
If you have a Bachelor's degree or equivalent practical experience and a background in Unix/Linux environments, distributed systems, machine learning, information retrieval, and TCP/IP, along with programming skills in C, C++, Java, or Python, we encourage you to apply. A Bachelor's or advanced degree in Computer Science, Computer Engineering, or a related field is preferred. By joining Google, you will be part of an engineering-driven organization that fosters innovation and offers opportunities to work on groundbreaking technologies that shape the future of global connectivity and information access.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You should have expert-level proficiency in Python and Python frameworks, or Java. You must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Deep experience with key AWS services is required: Compute (PySpark, Lambda, ECS), Storage (S3), Databases (DynamoDB, Snowflake), Networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), Security (IAM, KMS, Secrets Manager), and Monitoring (CloudWatch, X-Ray, CloudTrail), along with databases like Cassandra and PostgreSQL. You should have very strong hands-on knowledge of using Python for integrations between systems through different data formats. Expertise in deploying and maintaining applications in AWS, along with hands-on experience with Kinesis streams and auto-scaling, is essential. Designing and implementing distributed systems and microservices, and following best practices for scalability, high availability, and fault tolerance, are key responsibilities. Strong problem-solving and debugging skills are necessary for this role. You should also have the ability to lead technical discussions and mentor junior engineers. Excellent written and verbal communication skills are a must. Comfort working in agile teams with modern development practices, and collaborating with business and other teams to understand business requirements and work on project deliverables, is expected. Participation in requirements gathering, understanding, and designing a solution based on the available framework and code, as well as experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker), is required. An AWS certification such as AWS Certified Solutions Architect or Developer is preferred. This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai.

Qualifications:
- Bachelor's degree or foreign equivalent required from an accredited institution. Consideration will be given to three years of progressive experience in the specialty in lieu of every year of education.
- At least 8+ years of Information Technology experience.
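For context on the Lambda work described above: AWS invokes a handler as a plain function taking (event, context), and an SQS-triggered handler iterates over event["Records"]. A sketch that runs locally against a synthetic event (the event shape follows AWS's SQS-to-Lambda contract; the "order_id"/"amount" payload fields are invented for the example):

```python
import json

def handler(event, context=None):
    """SQS-triggered Lambda: parse each record body and tally totals.
    The outer event shape is AWS's SQS->Lambda contract; the payload
    fields ('order_id', 'amount') are made up for this example."""
    total = 0.0
    ids = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])  # SQS delivers the body as a string
        ids.append(payload["order_id"])
        total += payload["amount"]
    return {"processed": ids, "total": total}

# Synthetic event, so the handler can be unit-tested without AWS.
fake_event = {"Records": [
    {"body": json.dumps({"order_id": "A1", "amount": 10.5})},
    {"body": json.dumps({"order_id": "B2", "amount": 4.5})},
]}
print(handler(fake_event))  # {'processed': ['A1', 'B2'], 'total': 15.0}
```

Keeping the handler a pure function of the event is what makes this style of local testing possible.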
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a Lead Scala Developer with 4-8 years of development experience, specializing in the Akka or LAGOM frameworks. Your expertise lies in building scalable microservices using Scala, Akka, and/or LAGOM, as well as containerized applications with Docker and Kubernetes. You have practical experience in managing real-time messaging with Apache Pulsar and integrating databases using the Slick connector and PostgreSQL. Additionally, you are proficient in enabling search and analytics features with Elasticsearch and collaborating with GitLab CI/CD pipelines for deployment workflows.

Your role involves developing and maintaining scalable backend systems for data-intensive applications, emphasizing high performance, innovation, and clean code. You will work on real-time, distributed systems, requiring a deep understanding of microservices architecture and tools such as Apache Pulsar, Elasticsearch, and Kubernetes. Collaboration across teams is crucial as you write clean, well-structured, and maintainable code.

Your background includes a Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. You possess strong software engineering and documentation skills, ensuring the quality and efficiency of your work. While experience with Kafka or RabbitMQ, monitoring/logging tools, frontend frameworks like React or Angular, and cloud platforms like AWS, GCP, or Azure is considered advantageous, it is not mandatory.

Joining this role offers you the opportunity to work on high-performance systems with modern architecture, in a collaborative and growth-oriented environment. You will have access to cutting-edge tools, infrastructure, and learning resources, with prospects for long-term growth, upskilling, and mentorship. The role also promotes a healthy work-life balance with onsite amenities and team events.
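Akka's central abstraction, the actor, owns private state and processes one mailbox message at a time, which is what removes shared-state races. A toy single-threaded Python version of that model (Akka itself runs on the JVM with Scala/Java APIs; this only illustrates the message-at-a-time discipline):

```python
from collections import deque

class CounterActor:
    """Toy actor: private state mutated only via messages in a mailbox."""
    def __init__(self):
        self.count = 0
        self.mailbox = deque()

    def tell(self, msg):
        """Fire-and-forget send, as in Akka's `actorRef ! msg`."""
        self.mailbox.append(msg)

    def run(self):
        # Process messages strictly one at a time -- no shared-state races.
        while self.mailbox:
            msg = self.mailbox.popleft()
            if msg == "increment":
                self.count += 1
            elif msg == "decrement":
                self.count -= 1

actor = CounterActor()
for _ in range(3):
    actor.tell("increment")
actor.tell("decrement")
actor.run()
print(actor.count)  # 2
```

A real actor runtime schedules many such mailboxes concurrently and adds supervision, but the state-isolation idea is the same.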
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an integral part of Omnissa, you will have the opportunity to contribute to the development and enhancement of Workspace ONE, an innovative digital workspace platform that ensures secure access to applications on various devices. Your role will involve designing and developing scalable software solutions for the Unified Endpoint Management (UEM) platform. You will play a crucial part in writing code, implementing new use cases, and enhancing the current system to cater to diverse platform businesses.

In your journey at Omnissa, your success will be measured by your ability to produce high-quality software designs, execute them effectively, and continuously improve the product. You will be expected to collaborate with cross-functional teams, contribute to codebases, and identify opportunities for enhancing the scalability, usability, and supportability of the product. Additionally, you will work on a distributed application with an event-driven architecture, utilizing technologies such as C#, the .NET Framework, SQL/PostgreSQL/OpenSearch, Kafka/Redis/RabbitMQ for communications, and ASP.NET MVC along with Angular for front-end development.

To excel in this role, you are required to possess a Bachelor's or Master's degree in Computer Science or a related field, along with proficiency in C# and the .NET Framework. Your understanding of distributed systems, object-oriented design, and multi-threaded programming will be crucial for the role. Furthermore, your ability to troubleshoot, analyze logs, and ensure code quality through various testing methodologies will play a significant role in your success. You should exhibit a strong sense of ownership, prioritize security and compliance considerations, and have experience in large-scale enterprise technology deployments and cloud computing. Omnissa values diversity and inclusivity in its workforce, aiming to create an environment that fosters innovation and success.
We are an Equal Opportunity Employer, committed to providing an equal platform for all individuals based on merit. If you are passionate about driving technological advancements, shaping the future of work, and contributing to a global team, we encourage you to join us on our journey towards creating impactful and secure digital workspaces. Location: Bengaluru. Location type: Hybrid/Onsite.
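The event-driven architecture this listing describes (Kafka/Redis/RabbitMQ for communications) rests on a simple decoupling: publishers know topics, not handlers. A minimal in-process sketch, written in Python for brevity even though the role itself is C#/.NET; the class and topic names are illustrative.

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal in-process publish/subscribe dispatcher.

    Real deployments use a broker (Kafka, RabbitMQ, Redis) so publishers
    and subscribers can run in different processes, but the decoupling
    idea is the same: producers emit to a topic and never reference the
    consumers directly.
    """
    def __init__(self) -> None:
        self._handlers: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> int:
        for handler in self._handlers[topic]:
            handler(payload)
        return len(self._handlers[topic])  # how many handlers ran

bus = EventBus()
seen = []
bus.subscribe("device.enrolled", seen.append)
bus.publish("device.enrolled", {"id": 42})
print(seen)  # → [{'id': 42}]
```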
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
You will be responsible for designing architectures for meta-learning, self-reflective agents, and recursive optimization loops. Your role will involve building simulation frameworks for behavior grounded in Bayesian dynamics, attractor theory, and teleo-dynamics. Additionally, you will develop systems that integrate graph rewriting, knowledge representation, and neurosymbolic reasoning. Conducting research on fractal intelligence structures, swarm-based agent coordination, and autopoietic systems will be part of your responsibilities. You are expected to advance Mobius's knowledge graph with ontologies supporting logic, agency, and emergent semantics. Integration of logic into distributed, policy-scoped decision graphs aligned with business and ethical constraints is crucial. Furthermore, publishing cutting-edge results and mentoring contributors in reflective system design and emergent AI theory will be part of your duties. Lastly, building scalable simulations of multi-agent, goal-directed, and adaptive ecosystems within the Mobius runtime is an essential aspect of the role. In terms of qualifications, you should have proven expertise in meta-learning, recursive architectures, and AI safety. Proficiency in distributed systems, multi-agent environments, and decentralized coordination is necessary. Strong implementation skills in Python are required, with additional proficiency in C++, functional, or symbolic languages being a plus. A publication record in areas intersecting AI research, complexity science, and/or emergent systems is also desired. 
Preferred qualifications include experience with neurosymbolic architectures and hybrid AI systems; fractal modeling, attractor theory, and complex adaptive dynamics; topos theory, category theory, and logic-based semantics; knowledge ontologies, OWL/RDF, and semantic reasoners; autopoiesis, teleo-dynamics, and biologically inspired system design; and swarm intelligence, self-organizing behavior, emergent coordination, and distributed learning systems. In terms of technical proficiency, you should be proficient in programming languages such as Python (required) and C++, Haskell, Lisp, or Prolog (preferred for symbolic reasoning); frameworks like PyTorch and TensorFlow; distributed systems including Ray, Apache Spark, Dask, and Kubernetes; knowledge technologies like Neo4j, RDF, OWL, and SPARQL; experiment management tools like MLflow and Weights & Biases; and GPU and HPC systems like CUDA, NCCL, and Slurm. Familiarity with formal modeling tools like Z3, TLA+, Coq, and Isabelle is also beneficial. Your core research domains will include recursive self-improvement and introspective AI; graph theory, graph rewriting, and knowledge graphs; neurosymbolic systems and ontological reasoning; fractal intelligence and dynamic attractor-based learning; Bayesian reasoning under uncertainty and cognitive dynamics; swarm intelligence and decentralized consensus modeling; topos theory and the abstract structure of logic spaces; autopoietic, self-sustaining system architectures; and teleo-dynamics and goal-driven adaptation in complex systems.
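The "Bayesian reasoning under uncertainty" this role centers on reduces, in its simplest discrete form, to repeatedly reweighting a prior by a likelihood and renormalizing. A tiny sketch, with a three-hypothesis grid chosen purely for illustration:

```python
def posterior(prior: dict, likelihood) -> dict:
    """One Bayesian update over a discrete hypothesis grid.

    posterior(h) is proportional to prior(h) * likelihood(h),
    normalized so the probabilities sum to 1.
    """
    unnorm = {h: p * likelihood(h) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Three hypotheses for a coin's heads-probability, uniform prior.
belief = {0.25: 1/3, 0.5: 1/3, 0.75: 1/3}
for _ in range(8):                      # observe 8 heads in a row
    belief = posterior(belief, lambda h: h)

print(max(belief, key=belief.get))  # → 0.75
```

After eight heads, over 96% of the posterior mass sits on the 0.75 hypothesis, which is the belief-concentration dynamic that agent simulations build on.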
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Senior Scala Developer (Akka Specialist) at our Pune office, you will be responsible for leading the implementation of complex backend systems using the Akka toolkit and Scala. In this senior-level engineering position, you will play a crucial role in developing actor-based applications and APIs, defining best practices for reactive and distributed systems, and mentoring junior developers. Additionally, you will contribute to architectural decisions, conduct design and code reviews, monitor performance, conduct root cause analysis, and drive improvements. Key Responsibilities: - Lead the development of actor-based applications and APIs - Define best practices for reactive and distributed systems - Mentor a team of Scala developers - Conduct design and code reviews - Monitor performance, conduct root cause analysis, and drive improvements To excel in this role, you must have strong expertise in Scala and the Akka toolkit, along with experience in leading distributed backend teams. Deep knowledge of Akka Actors, Streams, and Clustering, as well as familiarity with design patterns and architecture principles, is essential. Knowledge of CQRS and Event Sourcing, and experience with Akka Persistence and Cluster Sharding, are considered good to have. Perks of this role include leadership responsibility in a fast-moving tech environment, access to enterprise tools, architecture planning, and opportunities to shape tech decisions. If you are passionate about Scala, distributed systems, the Akka toolkit, actor-based applications, design patterns, and architecture principles, we encourage you to apply for this exciting opportunity.
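The actor model at the heart of the Akka work above gives each actor private state, a mailbox, and one-message-at-a-time processing. A toy sketch of just that core guarantee, in Python rather than Scala and deliberately not Akka's API (real Akka adds supervision, dispatchers, and clustering on top):

```python
from collections import deque

class Actor:
    """Toy actor: private state, a mailbox, messages handled one at a time.

    The behavior function takes (state, message) and returns the new
    state, so no shared mutable state is ever touched concurrently --
    the property Akka's actors are built to guarantee.
    """
    def __init__(self, behavior, state):
        self.behavior = behavior
        self.state = state
        self.mailbox = deque()

    def tell(self, msg):
        """Enqueue a message; nothing is processed yet."""
        self.mailbox.append(msg)

    def drain(self):
        """Process the mailbox in FIFO order and return the final state."""
        while self.mailbox:
            self.state = self.behavior(self.state, self.mailbox.popleft())
        return self.state

counter = Actor(lambda total, msg: total + msg, state=0)
for delta in (1, 2, 3):
    counter.tell(delta)
print(counter.drain())  # → 6
```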
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Software Engineer, you will be responsible for designing, developing, and maintaining scalable backend services and workflow orchestration components using Python and GoLang. You will collaborate with the Airflow and Temporal team to build and optimize data pipelines and asynchronous job execution frameworks. Your role will involve implementing and managing complex workflow logic using Apache Airflow and Temporal, and ensuring high code quality through unit testing, integration testing, and code reviews. Additionally, you will work closely with cross-functional teams, including Data Engineering, DevOps, and Platform Engineering. Your contributions will include participating in architectural discussions and decision-making processes to ensure scalable and maintainable systems. It is essential to write clear documentation and actively participate in knowledge-sharing sessions. To excel in this role, you should have 5-7 years of professional software engineering experience. Strong hands-on programming skills in Python and GoLang are required, along with a solid understanding of concurrent and distributed systems. Previous experience with Apache Airflow and/or Temporal.io is highly beneficial. You should also have expertise in designing and developing robust APIs and backend services while being familiar with containerization tools such as Docker and CI/CD practices. A good understanding of the software development lifecycle (SDLC) and Agile methodologies is necessary. Excellent problem-solving, communication, and collaboration skills are key to success in this position. It would be advantageous to have experience with cloud platforms like AWS, GCP, or Azure. Exposure to microservices architecture and event-driven systems, as well as familiarity with monitoring and observability tools, would be considered a plus.
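Orchestrators like Airflow and Temporal model a workflow as a directed acyclic graph of tasks, and scheduling reduces to topological ordering: a task may run only after everything upstream of it has finished. A dependency-free sketch of that ordering step using Kahn's algorithm (Airflow and Temporal each have richer schedulers; the task names here are illustrative):

```python
from collections import deque

def topo_order(deps: dict) -> list:
    """Kahn's algorithm: a run order for tasks given their upstream deps.

    deps maps task -> set of tasks that must finish first. Raises on a
    cycle, which is also how an orchestrator rejects an invalid DAG.
    """
    pending = {t: set(d) for t, d in deps.items()}
    ready = deque(t for t, d in pending.items() if not d)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for t, d in pending.items():   # unblock downstream tasks
            if task in d:
                d.remove(task)
                if not d:
                    ready.append(t)
    if len(order) != len(pending):
        raise ValueError("cycle in DAG")
    return order

dag = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
print(topo_order(dag))  # → ['extract', 'transform', 'load']
```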
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
The Digital Software Engineer Senior Manager accomplishes results through the management of professional teams and departments. Integrating subject matter and industry expertise within a defined area, you contribute to standards around which others will operate. You must have an in-depth understanding of how areas collectively integrate within the sub-function, coordinate and contribute to the objectives of the entire function, along with basic commercial awareness. Developed communication and diplomacy skills are essential to guide, influence, and convince others, particularly colleagues in other areas and occasional external customers. You are responsible for the volume, quality, timeliness, and delivery of end results of an area. You may have to plan, budget, and formulate policies within your area of expertise. You will be involved in short-term resource planning. As a Senior Manager, you have full management responsibility for your team, which may include managing people, budget, and planning. This includes duties such as performance evaluation, compensation, hiring, disciplinary actions, terminations, and budget approval. Responsibilities: - Possess the ability to continuously build a network of talent inside and outside of the company. - Create mechanisms to help onboard new talent to the organization and mentor others effectively. - Coach and provide feedback to direct reports to help develop talent and support career development. - Apply performance standards and identify resource needs for the team to set and balance goals across the team for optimal performance against department goals and employee development. - Design, implement, and deploy software components to solve difficult problems, generating positive feedback. - Have a solid understanding of development approaches and know how to best use them. - Work independently and with your team to deliver software successfully.
- Consistently deliver high-quality work, incorporating best practices that your team trusts. - Rapidly provide useful code reviews for changes submitted by others. - Focus on operational excellence by identifying problems and proposing solutions, taking on projects to improve your team's software and make it better and easier to maintain. - Make improvements to your team's development and testing processes. - Establish good working relationships with teammates and peers working on related software. - Recognize discordant views and take part in constructive dialogue to resolve them. - Confidently train new teammates about your customers, the team's software, how it is constructed, tested, and operated, and how it fits into the bigger picture. - Assess risk appropriately when making business decisions, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations. Qualifications: - 6-10 years of relevant experience in an Apps Development role, or senior-level experience in an Enterprise Architecture role with subject matter expertise in one or more areas. - Exhibit expertise in all aspects of technology by understanding broader patterns and techniques as they apply to Citi's internal and external cloud platforms (AWS, PCF, Akamai). - Lead resources and serve as a functional Subject Matter Expert (SME) across the company through advanced knowledge of algorithms, data structures, distributed systems, and networking, and drive broader adoption forward. - Acquire relevant technology and financial industry skills (AWS PWS) and understand all aspects of NGA technology, including innovative approaches and new opportunities. - Demonstrate knowledge of automating code quality, code performance, unit testing, and build processing in the CI/CD pipeline. Education: - Bachelor's/University degree; Master's degree preferred.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
hyderabad, telangana
On-site
You will be joining OSP India - Hyderabad Private Limited, now a part of Otto Group one.O, a high-performance partner focused on strategy consulting and technology for the Otto Group. This transition aligns the company with global teams in Germany, Spain, and Taiwan, enhancing collaboration moving forward. Your role and job security will remain unaffected by this rebranding, ensuring continuity with the company culture. As a Fullstack Developer with 7-10 years of professional experience, you will be responsible for building scalable, high-quality software solutions. You must showcase proficiency in both frontend and backend technologies, with a track record of utilizing AI-assisted coding tools and low-code platforms to expedite development without compromising quality. Your role will involve developing prototypes, proof-of-concepts, and production-grade applications, leveraging fullstack technologies and AI coding tools. Collaboration with cross-functional teams, integration of APIs and third-party services, and maintenance of code quality will be key aspects of your responsibilities. To excel in this role, you must possess strong hands-on experience with frontend frameworks like React, Angular, or Vue.js, and backend development skills in Node.js, Python, Java, or C#. Experience with AI coding tools such as GitHub Copilot, ChatGPT, AskCodi, and exposure to low-code/no-code platforms like Microsoft Power Apps, Bubble, Retool, or similar tools are essential. Your ability to assess new technologies critically, problem-solving skills, and effective communication with both technical and non-technical stakeholders will be crucial. Mentoring junior developers and fostering a culture of continuous learning are valued traits for this role. While experience with DevOps, cloud platforms (AWS, Azure, GCP), design systems, and Agile/Scrum environments are advantageous, they are not mandatory requirements. 
The company offers benefits such as flexible working hours, comprehensive medical insurance, and a hybrid work model combining in-office collaboration and remote work opportunities to support work-life balance and employee well-being.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Job Description As a Big Data Engineer with Capco, a Wipro company, you will play a crucial role in leveraging your skills to drive innovative solutions for our clients in the banking, financial, and energy sectors. Your expertise in messaging technologies such as Apache Kafka, programming languages like Scala and Python, and tools like NiFi and Airflow will be essential in designing and implementing intuitive and responsive user interfaces that enhance data analysis capabilities. You will be responsible for writing efficient queries using Jupyter Notebook, optimizing Spark performance, and ensuring the reliability and scalability of distributed systems. Your strong understanding of cloud architecture, SQL, and software engineering concepts will enable you to deliver high-quality code that meets performance standards. At Capco, we value diversity, inclusivity, and creativity, and believe that different perspectives contribute to our competitive advantage. With no forced hierarchy, you will have the opportunity to advance your career and make a significant impact on our clients' businesses. Join us at Capco and be part of a dynamic team that is driving transformation in the energy and financial services industries.
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
At BMC, trust is not just a word - it's a way of life! We are an award-winning, equal opportunity, culturally diverse organization that thrives on giving back to the community. Our work environment is designed to help you balance your priorities so that you can bring your best self to work every day. We celebrate your accomplishments and create a supportive and inspiring atmosphere where innovation flourishes. As a Lead Full-Stack Developer at BMC, you will be responsible for designing and implementing robust, scalable features across the enterprise platform for Multi-Cloud Services. You will collaborate with product managers and architects to translate functional requirements into scalable technical designs. Your role will involve driving planning, feature breakdown, and sprint execution to ensure timely and quality deliverables using CI/CD practices. You will develop enterprise-grade full-stack solutions utilizing Java, Spring Boot, AngularJS, Node.js, and RESTful services. In this role, you will provide technical guidance, mentorship, and code reviews to junior team members and peers. You will actively participate in architectural reviews, promoting best practices and reusable designs. Your focus will be on ensuring that solutions are scalable, performant, and aligned with SaaS product goals. Additionally, you will continuously research emerging technologies and tools, contributing ideas for innovation and improvement. To excel in this role, you should possess 10-14 years of experience in full-stack software development with a strong foundation in Java and/or Node.js. Hands-on experience with Spring Boot, AngularJS (5.x+), Advanced JavaScript, and REST APIs is essential. A strong understanding of OOP, MVC, scalable architecture, multithreading, and high-performance design is also required. Proficiency in integrating with web services, build tools, version control systems, and Agile practices is expected. 
While experience with web packaging tools, advanced testing frameworks, containerization, cloud platforms, and DevOps concepts is considered beneficial, our team is dedicated to helping you develop these skills. BMC values its employees and fosters a culture of collaboration and innovation. We encourage candidates from diverse backgrounds to apply and bring their unique perspectives to our team. If you are excited about the opportunity to work with BMC and believe you can contribute to our success, we encourage you to apply. Your authentic self is welcome here, and we are committed to ensuring fair and transparent compensation practices for all our employees.
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Noida
Work from Office
Key Responsibilities : - Design, develop, and maintain scalable and secure backend applications using Java and Spring Boot 3.2 - Develop RESTful APIs and integrate with third-party services and internal systems - Work on Spring Batch for handling scheduled or high-volume background jobs - Design and develop microservices and ensure inter-service communication and data consistency - Build and maintain an intuitive portal/dashboard for internal/external stakeholders with appropriate backend logic - Optimize queries and work closely with the Oracle DB, writing complex SQL queries and performance tuning - Collaborate with front-end developers, QA engineers, and product managers to deliver high-quality solutions - Conduct code reviews and provide guidance to junior developers - Troubleshoot and debug application issues, perform root cause analysis, and implement effective solutions - Write clear, maintainable, and testable code with appropriate unit and integration tests - Take ownership of features from design to deployment and support - Participate in Agile ceremonies and contribute to sprint planning and retrospectives Required Skills and Experience : - 5-7 years of proven experience as a Java backend developer - Strong programming skills in Java with a deep understanding of object-oriented programming - Extensive experience in Spring Boot 3.2, Spring Batch, and Spring job scheduling - Proficiency in developing and consuming RESTful APIs - Hands-on experience with microservices architecture and distributed systems - Solid experience in working with Oracle Database and writing optimized SQL queries - Experience in integrating backend services with front-end portals or dashboards - Strong understanding of software engineering best practices including coding standards, code reviews, source control management, build processes, testing, and operations - Excellent analytical and problem-solving skills; must be able to analyze complex business requirements and build logical
solutions - Familiarity with tools like Git, Maven/Gradle, Jenkins, and containerization platforms (Docker/Kubernetes) is a plus - Good communication and collaboration skills to work effectively in a team environment Nice to Have : - Experience in performance tuning and application profiling - Exposure to CI/CD pipelines and DevOps practices - Knowledge of front-end technologies (basic level) for better integration with backend Educational Qualification : - Bachelor's or Master's degree in Computer Science, Engineering, or a related field
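The Spring Batch work listed above follows a chunk-oriented model: read items one at a time, buffer them, and write each full chunk in one go (one transaction per chunk in Spring Batch). A language-neutral Python sketch of just that loop, not Spring's API; all names are illustrative.

```python
def run_chunked(reader, processor, writer, chunk_size: int = 3) -> int:
    """Chunk-oriented step: read, process, and write in fixed-size chunks.

    Returns the number of items written. A processor returning None
    filters the item out, mirroring Spring Batch's ItemProcessor
    convention.
    """
    written = 0
    chunk = []
    for item in reader:
        processed = processor(item)
        if processed is not None:
            chunk.append(processed)
        if len(chunk) == chunk_size:
            writer(chunk)              # one write (transaction) per chunk
            written += len(chunk)
            chunk = []
    if chunk:                          # flush the final partial chunk
        writer(chunk)
        written += len(chunk)
    return written

out = []
n = run_chunked(range(7), lambda x: x * 10, out.append, chunk_size=3)
print(n, out)  # → 7 [[0, 10, 20], [30, 40, 50], [60]]
```

Chunking is what makes high-volume background jobs restartable and memory-bounded: failure costs at most one chunk, not the whole run.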
Posted 1 month ago
5.0 - 10.0 years
3 - 6 Lacs
Noida
Work from Office
We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.
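The partitioning responsibility in this listing comes down to one guarantee: a keyed record always lands on the same partition, which preserves per-key ordering. A dependency-free sketch of that idea; Kafka's default partitioner actually hashes with murmur2, so crc32 here is a stand-in chosen only to keep the example self-contained.

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    """Keyed partitioning in the spirit of Kafka's default partitioner:
    hash the key, take it modulo the partition count. What matters is
    stability -- one key always maps to one partition -- not the exact
    hash function (Kafka uses murmur2; crc32 here is illustrative)."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

p = partition_for("order-1", 6)
print(p == partition_for("order-1", 6))  # → True
```

This is also why changing a topic's partition count breaks key-to-partition stability for existing keys: the modulus changes.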
Posted 1 month ago
5.0 - 10.0 years
3 - 6 Lacs
Pune
Work from Office
We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
Our client is seeking an enthusiastic, self-motivated, and hands-on senior web application engineer with expertise in React/Redux and strong troubleshooting skills to join the engineering team. The ideal candidate should have experience in a fast-paced India/US product start-up or a product engineering services company in a senior engineer role, focusing on building and managing high-performance real-time systems. A genuine passion for engineering, along with accuracy and efficiency, excellent organizational skills, and the ability to work both independently and as part of a team are essential qualities for this role. In this position, you will be responsible for architecting and developing web applications, ensuring high code quality and development standards, creating a new UI component library, and conducting unit tests to ensure robustness and reliability. Additionally, you will be involved in managing releases for internal stakeholders, developing and maintaining internal tooling and reporting infrastructure, and collaborating with the product team to translate business requirements into actionable tasks for the engineering team. Working closely with other engineers, you will participate in planning, building, and testing new features. To excel in this role, candidates should hold a BA/BS/Master's degree in Computer Science or a related field, with a minimum of 4 years of experience as a web application engineer. Proficiency in JavaScript and the MERN stack is required, along with at least 2 years of solid experience with Node.js. Candidates should also have a strong background in websockets, observable-based programming, React, Redux, and TypeScript, with exposure to distributed systems, microservices, and databases such as MySQL. Experience in creating client-side asynchronous web applications using Ajax is essential, while knowledge of building a streaming analytics system is considered a significant advantage.
Familiarity with CI/CD infrastructure, build tools like Webpack or Rollup, and cloud platforms such as Microsoft Azure, GCP, or AWS is beneficial. Experience with MongoDB and Cassandra is also a plus. In addition to technical skills, successful candidates should possess soft skills such as the ability to work independently on projects from inception, strong verbal and written communication skills for effective problem-solving and collaboration with India & US-based team members, a sense of urgency, and a commitment to accuracy and timeliness. The role requires the capacity to work effectively in high-pressure situations, manage multiple tasks simultaneously, and demonstrate superior issue resolution skills. Familiarity with other JavaScript frameworks like Vue.js or Angular/NgRx/RxJS is advantageous for the role.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will lead the development of high-performance backend services using Java and Spring Boot, designing and building reliable and scalable REST APIs and microservices. Taking ownership of features and system components throughout the software lifecycle will be your responsibility. You will also design and implement CI/CD workflows using tools like Jenkins or GitHub Actions, contributing to architectural decisions, code reviews, and system optimizations. Your expertise in Java and advanced experience with the Spring Boot framework will be essential, along with proven experience in building and scaling REST APIs and microservices. Hands-on experience with CI/CD automation and DevOps tools is required, as well as working knowledge of distributed systems, cloud platforms, and Kafka. A strong understanding of system design, performance optimization, and best coding practices is crucial for this role. Nice-to-have skills include proficiency in Docker and Kubernetes for containerized deployments, exposure to NoSQL databases such as MongoDB and Cassandra, and experience with configuration server management and dynamic config updates. Familiarity with monitoring and logging tools like Prometheus, the ELK Stack, or others, along with awareness of cloud security standards, observability, and incident management, will be beneficial. This is a full-time position with benefits including Provident Fund. The work schedule is during the day shift, and the role requires at least 5 years of experience across Java development, Docker and Kubernetes, NoSQL databases (MongoDB, Cassandra), Kafka, the Spring Boot framework, Jenkins, GitHub, REST APIs, system design, cloud architectures and microservices, and monitoring and logging tools, plus awareness of cloud security. Work location is in person.
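A recurring building block in the microservice and incident-management work above is retrying a flaky downstream call with exponential backoff. A minimal sketch (in Python for brevity, though the role is Java/Spring; a production version would actually sleep, add jitter, and cap the delay, and Spring users would typically reach for Resilience4j instead of hand-rolling this):

```python
def call_with_retry(fn, max_attempts: int = 4, base_delay: float = 0.5):
    """Retry a flaky call with exponential backoff.

    Delays grow as base_delay * 2**attempt (0.5s, 1s, 2s, ...). They are
    collected and returned alongside the result here, rather than slept,
    so the schedule is easy to inspect; the last failure is re-raised.
    """
    delays = []
    for attempt in range(max_attempts):
        try:
            return fn(), delays
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delays.append(base_delay * 2 ** attempt)
    raise RuntimeError("unreachable")

calls = {"n": 0}
def flaky():
    """Hypothetical downstream call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result, delays = call_with_retry(flaky)
print(result, delays)  # → ok [0.5, 1.0]
```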
Posted 1 month ago