12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
Grade Level (for internal use): 12
The Role: Foreseer AI – Associate Director
The Team: The Foreseer team delivers digital transformation solutions at SPGMI for information extraction from structured and semi-structured documents and websites. Foreseer is a human-in-the-loop platform that combines the latest AI/ML advances with a state-of-the-art UI to deliver multiple projects, all powered by a core distributed, cloud-native, auto-scalable framework. The team comprises Java and Python experts and ML engineers.
Responsibilities Include
- Support and foster a quality-first, agile culture built on partnership, trust, and sharing.
- Design, develop, and maintain functionality to create new solutions on the platform.
- Learn and understand all aspects of the framework and the project deliverables.
- Be technically deep and provide input on architecture, design, and code.
- Drive the technical design, architecture, and implementation of new features and iterations of your project.
- Collaborate closely with business and PM stakeholders in roadmap planning and implementation, ensuring technical milestones align with business requirements.
- Implement security measures and compliance standards to protect sensitive data and ensure adherence to industry regulations.
- Maintain comprehensive documentation of the platform, processes, and procedures for reference and knowledge sharing.
- Apply industry standards, governance, and best practices to deliver high-quality, scalable solutions.
- Adapt communication style to various audiences, from engineers to business stakeholders and executive leadership.
- Think and influence strategically, with demonstrated technical and business acumen and problem-solving skills.
Experience & Qualifications
- BS or MS degree in Computer Science, AI/ML, Data Science, or a related field.
- 12+ years of overall experience.
- 8+ years of hands-on experience in AI/ML, Data Science, or Technology Strategy.
- 4+ years of experience in leadership roles.
- Strong understanding of AI technologies (e.g., Generative AI, NLP, Computer Vision) and their business applications.
- Background in consulting or corporate strategy with a focus on AI.
- Familiarity with AI platforms (e.g., TensorFlow, PyTorch) and cloud AI services (AWS, GCP, Azure).
- Hands-on experience leveraging language models for information retrieval.
- Experience with different database technologies (e.g., RDBMS, NoSQL).
- Experience with containerization, Kubernetes, cloud platforms, CI/CD, deployments through CI/CD pipelines, and AWS services such as S3, EKS, and EC2.
- Proficiency in the development environment, including IDE, web and application servers, Git, Azure DevOps, unit-testing tools, and defect management tools.
Nice To Have Skills
- Distributed systems programming.
- Hands-on experience in Java and front-end technologies.
- Hands-on experience with Elasticsearch/Redis search.
- Experience with Agile and Scrum methodologies.
What’s In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing the energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you, and your career, need to thrive at S&P Global.
Our Benefits Include
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Job ID: 315895
Posted On: 2025-06-23
Location: Hyderabad, Telangana, India
Posted 6 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role
We are seeking an experienced Senior Java Developer with expertise in data integration and distributed systems. The ideal candidate will have a strong background in building scalable, secure microservices and managing data pipelines using modern cloud technologies.
Core Responsibilities
- Design and develop robust data integration solutions and enterprise integration patterns using Apache Kafka and Apache Camel
- Implement message routing, transformation, and mediation using the Apache Camel DSL
- Build and maintain secure RESTful web services using the Spring Framework
- Implement Change Data Capture (CDC) patterns for real-time data synchronization
- Design and maintain Avro schemas for data serialization and schema evolution
- Implement and optimize microservices architecture patterns
- Create and maintain CI/CD pipelines for automated testing and deployment
- Write Infrastructure as Code using Terraform for AWS resource provisioning
- Containerize applications using Docker and manage deployments on Amazon EKS
- Collaborate with cross-functional teams to design and implement scalable solutions
- Mentor junior developers and contribute to technical decision-making
Required Technical Skills
- 5+ years of experience in Java development with strong proficiency in Java 8+
- Strong expertise with Apache Camel:
  - Enterprise Integration Patterns implementation
  - Camel DSL (Java, XML, and YAML)
  - Component development and customization
  - Route testing and debugging
  - Performance tuning and optimization
  - Integration with Spring Boot
  - Message transformation and routing
  - Error handling and monitoring
- Deep understanding of the Spring Framework (Spring Boot, Spring Security, Spring Cloud)
- Extensive experience with Apache Kafka for building event-driven architectures
- Experience with Change Data Capture (CDC) tools and patterns
- Proficiency with Schema Registry (Apache Avro) and data serialization
- Strong knowledge of RESTful API design and implementation
- Hands-on experience with:
  - AWS services and cloud architecture patterns
  - Infrastructure as Code using Terraform
  - Docker containerization and Kubernetes (EKS)
  - CI/CD tools and methodologies
  - Git version control and branching strategies
Preferred Qualifications
Experience with:
- Kafka Streams and KSQL
- Spring libraries
- AWS service mesh implementations
- Monitoring and observability tools, metrics collection and visualization (e.g., Prometheus, Grafana)
- Test-driven development (TDD)
- Confluent Platform and its components
- Code quality measurement and improvement tools such as SonarQube, and peer code review using GitHub PR flows
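The Enterprise Integration Patterns named above are usually expressed in Camel DSL; the core idea of one of them, the content-based router, can be sketched in plain stdlib Python (a conceptual illustration only; the message types and queue names are invented, not part of the posting):

```python
# Conceptual sketch of the Content-Based Router EIP (normally a Camel route).
# Predicates and destination queue names below are invented examples.
from collections import defaultdict

class ContentBasedRouter:
    def __init__(self):
        self.routes = []                 # (predicate, destination), checked in order
        self.default = None
        self.queues = defaultdict(list)  # destination -> delivered messages

    def when(self, predicate, destination):
        self.routes.append((predicate, destination))
        return self

    def otherwise(self, destination):
        self.default = destination
        return self

    def send(self, message):
        """Deliver the message to the first matching route, else the default."""
        for predicate, destination in self.routes:
            if predicate(message):
                self.queues[destination].append(message)
                return destination
        self.queues[self.default].append(message)
        return self.default

router = (ContentBasedRouter()
          .when(lambda m: m["type"] == "order", "orders")
          .when(lambda m: m["type"] == "refund", "refunds")
          .otherwise("dead-letter"))

print(router.send({"type": "order", "id": 1}))    # → orders
print(router.send({"type": "unknown", "id": 2}))  # → dead-letter
```

In a real Camel route the same branching would be a `choice()/when()/otherwise()` block and the queues would be Kafka topics or JMS endpoints.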
Posted 6 days ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role
Position summary: A DevOps Engineer will be responsible for building, configuring, monitoring, and troubleshooting cloud-native applications on the Azure Cloud Platform. All customer-facing and internal cloud platform elements must be monitored to maintain service levels in line with customer SLAs and KPIs.
What you will be responsible for
- Adapting to existing cloud infrastructure and proposing alternatives where required
- Understanding requirements and processes, and strategizing the scaling of applications to support new launches
- Identifying and addressing risks to applications while maintaining application health
- Coordinating with technical teams to identify and handle the root cause of issues on the cloud
- Proposing cost-saving and optimization measures for applications deployed on cloud platforms
- Building Docker images, creating Kubernetes manifest files and Helm charts
- Platform monitoring, troubleshooting, and resolution of issues on Azure (L2 support)
- Adherence to process compliance and ensuring platform reliability
Competencies
- Strong hands-on experience with the Azure platform and Azure DevOps (VSTS)
- Source code management (Git and GitHub)
- Expertise in containerization using Docker and orchestration using Kubernetes
- Good experience with automation tools such as Azure ARM templates and PowerShell
- Monitoring tools such as Prometheus, Loki, and Grafana
- Strong knowledge of Linux shell scripting and Linux OS commands
- Helm package manager and Helm charts
- Good understanding of the Azure CLI, networking, and security
- Good knowledge of Azure PaaS components and customization
- Team player with an understanding of Agile development processes
- Excellent interpersonal, verbal, and written communication skills
What should you have?
- B.E/B.Tech/M.E/M.Tech/MCA (IT, CSE, ECE) with good communication skills
- 5-8 years of experience in cloud and DevOps, preferably Microsoft Azure
- Experience in setting up build and deployment pipelines (Azure DevOps)
- Experience with K8s, AKS, Docker, Helm, and ARM templates
- Strong knowledge of Git, Linux OS, shell scripting, and PowerShell
- Prior working experience with monitoring tools like Prometheus, Grafana, Loki, and Promtail is an added advantage
- Strong understanding of the IaaS, PaaS, and SaaS cloud models; Azure-certified professionals are preferred
Why join us?
- Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.
Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
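The L2 platform-monitoring duty described above often starts with parsing cluster state. A minimal sketch of that kind of check in stdlib Python follows; the `kubectl get pods`-style sample output and pod names are fabricated for illustration:

```python
# Sketch of an L2-support health check: parse `kubectl get pods`-style
# output and flag pods that are not fully ready or not Running.
def unready_pods(kubectl_output: str) -> list[str]:
    flagged = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip the header row
        name, ready, status = line.split()[:3]
        up, total = map(int, ready.split("/"))
        if up < total or status != "Running":
            flagged.append(name)
    return flagged

# Fabricated sample output for demonstration.
sample = """NAME            READY   STATUS             RESTARTS
api-7f9c        1/1     Running            0
worker-5d2a     0/1     CrashLoopBackOff   7
cache-9b1e      2/2     Running            1"""

print(unready_pods(sample))  # → ['worker-5d2a']
```

In practice the same signal would come from Prometheus alert rules rather than ad hoc parsing, but the triage logic is the same.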
Posted 6 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: IT Administrator with Networking & Server Administration
Location: Hyderabad
Experience: 6 months - 2 years
Job Type: Paid Internship
About Us: Instaresz Business Services Pvt Ltd is a forward-thinking, fast-growing technology company that thrives on innovative solutions. We are currently looking for an experienced IT Administrator who will take responsibility for managing and maintaining the network infrastructure, servers, and systems while ensuring smooth day-to-day IT operations across the organization.
Key Responsibilities:
- Set up, configure, and maintain LAN/WAN networks, routers, switches, firewalls, and VPNs.
- Administer Windows/Linux servers, Active Directory, DNS, DHCP, and user access controls.
- Manage software and OS package installations using tools like apt, yum, dnf, and rpm.
- Monitor and troubleshoot network and system performance issues.
- Maintain web, file, mail, and database servers (Apache, Nginx, Postfix, MySQL, etc.).
- Implement and monitor IT security measures including firewalls, antivirus, and access policies.
- Perform system backups and restore processes, and support disaster recovery plans.
- Support virtualization platforms (VMware, Hyper-V) and assist with basic cloud infrastructure (AWS, Azure).
- Automate tasks using PowerShell or Bash scripting.
- Document IT procedures, configurations, and network diagrams.
Required Skills & Qualifications:
- Proven experience in IT system administration, networking, and server management.
- Hands-on knowledge of networking protocols, IP addressing, subnetting, and VPNs.
- Experience with network devices such as routers, switches, and firewalls.
- Proficiency in Windows Server (Active Directory, Group Policies, DNS, DHCP) and Linux administration (Ubuntu, CentOS, RHEL).
- In-depth knowledge of server administration, including web servers (Apache, Nginx), databases (MySQL, PostgreSQL), and mail servers (Postfix, Exchange).
- Experience with package management tools (apt, yum, dnf, rpm).
- Familiarity with cloud platforms (AWS, Azure) and virtualization tools (VMware, Hyper-V).
- Strong understanding of IT security practices, including firewalls, antivirus, VPNs, and access management.
- Scripting skills for automation (PowerShell, Bash).
- Excellent problem-solving and troubleshooting abilities.
Preferred Certifications:
- CompTIA Network+
- CompTIA Security+
- Microsoft Certified: Windows Server / Azure Administrator
- Cisco Certified Network Associate (CCNA)
- Red Hat Certified System Administrator (RHCSA)
- ITIL Foundation (for IT Service Management)
Additional Skills (Good to Have):
- Experience with containerization technologies (Docker, Kubernetes).
- Knowledge of version control systems (Git).
Why Join Us:
- Competitive salary and performance-based incentives
- Dynamic and collaborative work environment
- Opportunities for learning and growth
- Exposure to cutting-edge technologies and industry trends
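The IP addressing and subnetting skills this role requires involve calculations that Python's stdlib `ipaddress` module handles directly; the network ranges below are illustrative examples, not a real deployment:

```python
# Subnetting sketch using the stdlib ipaddress module.
# The 10.0.0.0/24 range and the four-way split are invented examples.
import ipaddress

net = ipaddress.ip_network("10.0.0.0/24")
print(net.num_addresses)   # 256 addresses in a /24
print(net.netmask)         # 255.255.255.0

# Split the /24 into four /26 subnets, e.g. one per department.
subnets = list(net.subnets(new_prefix=26))
print([str(s) for s in subnets])
# ['10.0.0.0/26', '10.0.0.64/26', '10.0.0.128/26', '10.0.0.192/26']

# Membership test: which subnet does a given host fall in?
print(ipaddress.ip_address("10.0.0.70") in subnets[1])  # → True (.64-.127)
```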
Posted 6 days ago
2.0 - 4.0 years
0 Lacs
India
Remote
Who We Are
At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.
See yourself at Twilio
Join the team as Twilio’s next Software Engineer (L2), Data Platform, Bangalore, India.
About The Job
This position plays a crucial role in building a scalable, reliable, and efficient data platform that enables users to extract valuable insights from vast amounts of distributed data. As part of the team, you will contribute to creating the data backbone that supports key strategic decisions for our company and customers. You will work closely with experienced engineers, learning best practices in software development and data systems, and contributing to projects that directly impact the success of our platform.
Responsibilities
In this role, you’ll:
- Design, develop, test, and deploy features and services in AWS cloud environments.
- Ensure the reliability of your services, including responding to incidents, participating in on-call rotations, and conducting post-incident RCAs.
- Meet the Definition of Done for all software you build, while continuously improving and reducing technical debt.
- Write and maintain automated unit and integration tests within a continuous integration and delivery (CI/CD) environment.
- Manage tasks and collaborate effectively using GitHub, Jira, and the team’s build and deploy systems.
- Actively collaborate with Product and other teams to influence and improve the service or platform.
- Participate in design discussions, code reviews, backlog grooming, and Agile ceremonies, ensuring sprint commitments are met.
- Troubleshoot and resolve issues independently or with your team in your domain.
- Take responsibility for documenting the projects and processes you work on.
- Quickly acquire new technical skills and adapt to the fast-paced, disruptive industry landscape.
- Troubleshoot and debug to ensure smooth application performance.
Qualifications
Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While having “desired” qualifications makes for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!
Required:
- 2-4 years of proven experience as a Java Developer, with in-depth knowledge of the Java programming language and its ecosystem.
- Knowledge of Python programming, with the ability to write clean, maintainable, and efficient code and to use its libraries and frameworks effectively.
- Strong experience building resilient, high-throughput microservices and distributed systems, with a deep understanding of microservices architecture.
- Hands-on experience with SQL/NoSQL databases and caching mechanisms (e.g., Redis, Memcached).
- Proficiency in AWS, GCP, or other public cloud platforms, with knowledge of cloud-native design patterns and services.
- Containerization expertise with Docker and Kubernetes, including orchestration and scaling of services.
- Familiarity with modern automated testing frameworks and CI/CD pipelines, ensuring high-quality and seamless software delivery.
- A strong advocate for best engineering practices, including test-driven development (TDD), security-first approaches, and automation throughout the development lifecycle.
- Ability to analyze and troubleshoot complex production issues, with a proactive approach to problem-solving.
- Experience with Agile methodologies such as Scrum and a deep understanding of iterative development and continuous deployment strategies.
- Excellent communication skills, with the ability to convey technical concepts clearly in both written and verbal formats.
Desired:
- Experience with observability tools (e.g., Prometheus, Grafana, Datadog) for monitoring microservices and distributed systems.
- Familiarity with DevOps principles and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Knowledge of API gateway technologies and message brokers (e.g., Kafka) for building scalable, event-driven systems.
Location
This role is remote and based in India: Karnataka, Tamil Nadu, Telangana, Maharashtra, and New Delhi.
Travel
We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.
What We Offer
Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.
Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values, something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!
If this role isn't what you're looking for, please consider other open positions. Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
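The caching mechanisms the qualifications name (Redis, Memcached) center on time-to-live expiry, which can be sketched in stdlib Python. This is an illustration of the semantics only, not Twilio's implementation; the key names and the 50 ms TTL are invented:

```python
# Minimal TTL cache sketch illustrating Redis/Memcached-style expiry.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:  # lazy expiry on read
            del self._store[key]
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("session:42", {"user": "demo"})
print(cache.get("session:42"))  # → {'user': 'demo'}
time.sleep(0.06)
print(cache.get("session:42"))  # → None (expired)
```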
Posted 6 days ago
8.0 years
0 Lacs
Andhra Pradesh, India
On-site
8 years of hands-on experience in Thought Machine Vault, Kubernetes, Terraform, GCP/AWS, PostgreSQL, CI/CD, REST APIs, Docker, Kubernetes, and microservices.
Responsibilities
- Architect and manage enterprise-level databases with 24/7 availability
- Lead efforts on optimization, backup, and disaster recovery planning
- Design and manage scalable CI/CD pipelines for cloud-native apps
- Automate infrastructure using Terraform/CloudFormation
- Implement container orchestration using Kubernetes and ECS
- Ensure cloud security, compliance, and cost optimization
- Monitor performance and implement high-availability setups
- Collaborate with dev, QA, and security teams; drive architecture decisions
- Mentor team members and contribute to DevOps best practices
- Integrate microservices with Kafka for real-time data streaming and event-driven architecture
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes
Qualifications
- Bachelor's or Master's degree in Computer Science or a related field
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS
- Experience with Kafka for building event-driven architectures
- Strong database skills, including SQL and NoSQL databases
- Familiarity with containerization and orchestration tools (Docker, Kubernetes)
- Excellent problem-solving and troubleshooting skills
- Good to have: TM Vault core banking knowledge
- Strong communication and collaboration skills
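The event-driven integration this posting describes (microservices publishing to and consuming from Kafka) reduces to a publish/subscribe pattern; a toy in-process sketch in stdlib Python follows. The topic and event fields are invented, and in the posting's stack the transport would be Kafka topics rather than a dict:

```python
# Toy in-process event bus sketching the publish/subscribe half of an
# event-driven architecture. Topic names and event payloads are invented.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver the event to every handler subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(event)

audit_log = []
bus = EventBus()
bus.subscribe("payments", lambda e: audit_log.append(("audited", e["id"])))
bus.subscribe("payments", lambda e: audit_log.append(("notified", e["id"])))
bus.publish("payments", {"id": "txn-1", "amount": 100})
print(audit_log)  # → [('audited', 'txn-1'), ('notified', 'txn-1')]
```

The decoupling shown here, where producers know only the topic and never the handlers, is what makes adding a new consumer service a zero-change operation for existing publishers.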
Posted 6 days ago
8.0 years
0 Lacs
Andhra Pradesh, India
On-site
8 years of hands-on experience in AWS, PostgreSQL, Oracle, MySQL, MongoDB, performance tuning, backup, replication, REST APIs, Docker, Kubernetes, and microservices.
Responsibilities
- Architect and manage enterprise-level databases with 24/7 availability
- Lead efforts on optimization, backup, and disaster recovery planning
- Design and manage scalable CI/CD pipelines for cloud-native apps
- Automate infrastructure using Terraform/CloudFormation
- Implement container orchestration using Kubernetes and ECS
- Ensure cloud security, compliance, and cost optimization
- Monitor performance and implement high-availability setups
- Collaborate with dev, QA, and security teams; drive architecture decisions
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes
Qualifications
- Bachelor's or Master's degree in Computer Science or a related field
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS
- Experience with Kafka for building event-driven architectures
- Strong database skills, including SQL and NoSQL databases
- Familiarity with containerization and orchestration tools (Docker, Kubernetes)
- Excellent problem-solving and troubleshooting skills
- Good to have: TM Vault core banking knowledge
- Strong communication and collaboration skills
Posted 6 days ago
8.0 years
0 Lacs
Andhra Pradesh, India
On-site
8 years of hands-on experience in Thought Machine Vault, Kubernetes, Terraform, GCP/AWS, PostgreSQL, CI/CD, REST APIs, Docker, Kubernetes, and microservices.
Responsibilities
- Architect and manage enterprise-level databases with 24/7 availability
- Lead efforts on optimization, backup, and disaster recovery planning
- Ensure compliance; implement monitoring and automation
- Guide developers on schema design and query optimization
- Conduct DB health audits and capacity planning
- Collaborate with cross-functional teams to define, design, and ship new features
- Work on the entire software development lifecycle, from concept and design to testing and deployment
- Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability
- Integrate microservices with Kafka for real-time data streaming and event-driven architecture
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes
Qualifications
- Bachelor's or Master's degree in Computer Science or a related field
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS
- Experience with Kafka for building event-driven architectures
- Strong database skills, including SQL and NoSQL databases
- Familiarity with containerization and orchestration tools (Docker, Kubernetes)
- Excellent problem-solving and troubleshooting skills
- Good to have: TM Vault core banking knowledge
- Strong communication and collaboration skills
Posted 6 days ago
8.0 years
0 Lacs
Andhra Pradesh, India
On-site
8 years of hands-on experience in AWS, Kubernetes, Prometheus, CloudWatch, Splunk, Datadog, Terraform, scripting (Python/Go), and incident management.
Responsibilities
- Architect and manage enterprise-level databases with 24/7 availability
- Lead efforts on optimization, backup, and disaster recovery planning
- Design and manage scalable CI/CD pipelines for cloud-native apps
- Automate infrastructure using Terraform/CloudFormation
- Implement container orchestration using Kubernetes and ECS
- Ensure cloud security, compliance, and cost optimization
- Monitor performance and implement high-availability setups
- Collaborate with dev, QA, and security teams; drive architecture decisions
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes
Qualifications
- Bachelor's or Master's degree in Computer Science or a related field
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS
- Experience with Kafka for building event-driven architectures
- Strong database skills, including SQL and NoSQL databases
- Familiarity with containerization and orchestration tools (Docker, Kubernetes)
- Excellent problem-solving and troubleshooting skills
- Good to have: TM Vault core banking knowledge
- Strong communication and collaboration skills
Posted 6 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job purpose: Design, develop, and deploy end-to-end AI/ML systems, focusing on large language models (LLMs), prompt engineering, and scalable system architecture. Leverage technologies such as Java/Node.js/.NET to build robust, high-performance solutions that integrate with enterprise systems.

Who You Are:
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. PhD is a plus.
5+ years of experience in AI/ML development, with at least 2 years working on LLMs or NLP.
Proven expertise in end-to-end system design and deployment of production-grade AI systems.
Hands-on experience with Java/Node.js/.NET for backend development.
Proficiency in Python and ML frameworks (TensorFlow, PyTorch, Hugging Face Transformers).

Key Responsibilities:
1. Model Development & Training: Design, train, and fine-tune large language models (LLMs) for tasks such as natural language understanding, generation, and classification. Implement and optimize machine learning algorithms using frameworks like TensorFlow, PyTorch, or Hugging Face.
2. Prompt Engineering: Craft high-quality prompts to maximize LLM performance for specific use cases, including chatbots, text summarization, and question-answering systems. Experiment with prompt tuning and few-shot learning techniques to improve model accuracy and efficiency.
3. End-to-End System Design: Architect scalable, secure, and fault-tolerant AI/ML systems, integrating LLMs with backend services and APIs. Develop microservices-based architectures using Java/Node.js/.NET for seamless integration with enterprise applications. Design and implement data pipelines for preprocessing, feature engineering, and model inference.
4. Integration & Deployment: Deploy ML models and LLMs to production environments using containerization (Docker, Kubernetes) and cloud platforms (AWS/Azure/GCP). Build RESTful or GraphQL APIs to expose AI capabilities to front-end or third-party applications.
5. Performance Optimization: Optimize LLMs for latency, throughput, and resource efficiency using techniques like quantization, pruning, and model distillation. Monitor and improve system performance through logging, metrics, and A/B testing.
6. Collaboration & Leadership: Work closely with data scientists, software engineers, and product managers to align AI solutions with business objectives. Mentor junior engineers and contribute to best practices for AI/ML development.

What will excite us: Strong understanding of LLM architectures and prompt engineering techniques. Experience with backend development using Java/Node.js (Express)/.NET Core. Familiarity with cloud platforms (AWS, Azure, GCP) and DevOps tools (Docker, Kubernetes, CI/CD). Knowledge of database systems (SQL, NoSQL) and data pipeline tools (Apache Kafka, Airflow). Strong problem-solving and analytical skills. Excellent communication and teamwork abilities. Ability to work in a fast-paced, collaborative environment.

What will excite you: Lead AI innovation in a fast-growing, technology-driven organization. Work on cutting-edge AI solutions, including LLMs, autonomous AI agents, and Generative AI applications. Engage with top-tier enterprise clients and drive AI transformation at scale.

Location: Ahmedabad
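The few-shot prompting technique mentioned in the responsibilities above can be illustrated with plain string assembly. This is a minimal sketch: the instruction wording, example reviews, and labels are illustrative assumptions, not tied to any particular model or vendor API.

```python
# Sketch of few-shot prompt assembly for a sentiment classification task.
# The examples and instruction text are hypothetical placeholders.

FEW_SHOT_EXAMPLES = [
    ("The delivery arrived two days late.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def build_prompt(examples, query):
    """Assemble an instruction, labeled examples, and the new input."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The trailing "Sentiment:" cue invites the model to complete the label.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_prompt(FEW_SHOT_EXAMPLES, "The app keeps crashing on login.")
print(prompt)
```

The same template pattern extends to prompt-tuning experiments: varying the instruction, the number of examples, or their ordering, and measuring accuracy on a held-out set.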
Posted 6 days ago
5.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Very urgent position for Data Engineer Lead

Job Title: Data Engineer Lead
Experience: 5-10 years
Budget: 10-14 LPA (based on experience)
Location: Pune (hybrid)
Notice Period: Immediate to 15 days
Mandatory skills: Python, GCP, Spark, SQL (expert)

Tech Stack Table:
Skill | Experience | Rating out of 10
Python | |
GCP | |
Spark | |
SQL (Expert) | |

Lead Responsibilities
Lead and mentor a team of data engineers, providing technical guidance, setting best practices, and overseeing task execution for the migration project.
Design, develop, and architect scalable ETL processes to extract, transform, and load petabytes of data from on-premises SQL Server to GCP Cloud SQL PostgreSQL.
Oversee the comprehensive analysis of existing SQL Server schemas, data types, stored procedures, and complex data models, defining strategies for their optimal conversion and refactoring for PostgreSQL.
Establish and enforce rigorous data validation, quality, and integrity frameworks throughout the migration lifecycle, ensuring accuracy and consistency.
Collaborate strategically with database administrators, application architects, business stakeholders, and security teams to define migration scope, requirements, and cutover plans.
Lead the development and maintenance of advanced scripts (primarily Python) for automating large-scale migration tasks, complex data transformations, and reconciliation processes.
Proactively identify, troubleshoot, and lead the resolution of complex data discrepancies, performance bottlenecks, and technical challenges during migration.
Define and maintain comprehensive documentation standards for migration strategies, data mapping, transformation rules, and post-migration validation procedures.
Ensure data governance, security, and compliance standards are meticulously applied throughout the migration process, including data encryption and access controls within GCP.
Implement schema conversion or a custom schema mapping strategy for the SQL Server to PostgreSQL shift.
Refactor and translate complex stored procedures and T-SQL logic to PostgreSQL-compatible constructs while preserving functional equivalence.
Develop and execute comprehensive data reconciliation strategies to ensure consistency and parity between legacy and migrated datasets post-cutover.
Design fallback procedures and lead post-migration verification and support to ensure business continuity.
Ensure metadata cataloging and data lineage tracking using GCP-native or integrated tools.

Must-Have Skills
Expertise in data engineering, specifically for Google Cloud Platform (GCP).
Deep understanding of relational database architecture, advanced schema design, data modeling, and performance tuning.
Expert-level SQL proficiency, with extensive hands-on experience in both T-SQL (SQL Server) and PostgreSQL.
Hands-on experience with data migration processes, including moving datasets from on-premises databases to cloud storage solutions.
Proficiency in designing, implementing, and optimizing complex ETL/ELT pipelines for high-volume data movement, leveraging tools and custom scripting.
Strong knowledge of GCP services: Cloud SQL, Dataflow, Pub/Sub, Cloud Storage, Dataproc, Cloud Composer, Cloud Functions, and BigQuery.
Solid understanding of data governance, security, and compliance practices in the cloud, including the management of sensitive data during migration.
Strong programming skills in Python or Java for building data pipelines and automating processes.
Experience with real-time data processing using Pub/Sub, Dataflow, or similar GCP services.
Experience with CI/CD practices and tools like Jenkins, GitLab, or Cloud Build for automating the data engineering pipeline.
Knowledge of data modeling and best practices for structuring cloud data storage for optimal query performance and analytics in GCP.
Familiarity with observability and monitoring tools in GCP (e.g., Stackdriver, Prometheus) for real-time data pipeline visibility and alerting.

Good-to-Have Skills
Direct experience with GCP Database Migration Service, Storage Transfer Service, or similar cloud-native migration tools.
Familiarity with data orchestration using tools like Cloud Composer (based on Apache Airflow) for managing workflows.
Experience with containerization tools like Docker and Kubernetes for deploying data pipelines in a scalable manner.
Exposure to DataOps tools and methodologies for managing data workflows.
Experience with machine learning platforms like AI Platform in GCP to integrate with data pipelines.
Familiarity with data lake architecture and the integration of BigQuery with Google Cloud Storage or Dataproc.

Kindly share profiles only in this tracker format, with the tracker attached in the body of the mail. Profiles without this tracker format will not be considered.
Tracker columns: Sl. No | Date | Position | Name of the Candidate | Mobile Number | Email ID | Total Experience | Relevant Experience | Current CTC | Expected CTC | Notice Period / On Paper | Current Organisation | Current Location | Address with Pin Code | Reason for Leaving | DOB | Offer in Hand | Vendor Name | Comments
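The post-cutover data reconciliation duty described in this posting can be sketched as a row-count plus checksum comparison between source and target. This is a minimal, illustrative sketch: sqlite3 stands in for both the SQL Server source and the Cloud SQL PostgreSQL target connections, and the table and column names are hypothetical.

```python
# Sketch of post-migration reconciliation: compare row counts and a
# checksum over deterministically ordered rows on both sides.
# sqlite3 is used here only so the sketch is self-contained.
import hashlib
import sqlite3

def table_fingerprint(conn, table, key):
    """Return (row_count, sha256 digest) over rows sorted by a key column."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY {key}").fetchall()
    digest = hashlib.sha256(repr(rows).encode()).hexdigest()
    return len(rows), digest

def reconcile(src, dst, table, key):
    """True when source and target agree on count and content checksum."""
    return table_fingerprint(src, table, key) == table_fingerprint(dst, table, key)

# Demo: two in-memory databases standing in for legacy and migrated stores.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE trades (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO trades VALUES (?, ?)", [(1, 10.5), (2, 7.25)])

print(reconcile(src, dst, "trades", "id"))  # → True
```

At petabyte scale one would reconcile per partition and push the checksum computation into the databases themselves rather than fetching rows, but the comparison structure is the same.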
Posted 6 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description: Sr. Python Developer
Quick Apply: https://goodspace.ai/jobs/Sr.-Python-Developer?id=28338&source=campaign_Linkedin-Kritika_SrPythonDeveloper-28338
Prometteur Solutions Pvt Ltd
Years of Experience: 5-10 years
Location: Pune, Maharashtra, India
Key Skills: Elasticsearch, S3, AWS Lambda, AWS API Gateway, AWS Amplify, DynamoDB, ECS, Shield, Secrets Manager

Key Responsibilities:
● Develop, maintain, and optimize server-side applications using Python and Django.
● Design and implement RESTful APIs to support front-end functionalities.
● Work with cloud platforms, specifically AWS, to manage and deploy applications.
● Collaborate with front-end developers to integrate user-facing elements with server-side logic.
● Optimize applications for maximum speed and scalability.
● Develop and maintain databases and data storage solutions.
● Troubleshoot and debug applications to ensure high-quality and performance standards.
● Implement security and data protection measures.
● Participate in code reviews and contribute to continuous improvement initiatives.
● Handle both synchronous and asynchronous programming tasks to improve application performance and responsiveness.

Required Skills and Experience:
● 5-7 years of experience in backend development using Python and Django.
● Hands-on experience with AWS services, including EC2, S3, RDS, Lambda, and more.
● Strong understanding of web technologies such as HTTP, REST, and JSON.
● Experience with relational databases like PostgreSQL or MySQL and familiarity with ORM (Object Relational Mapping).
● Proficiency in designing and developing RESTful APIs.
● Familiarity with version control systems like Git.
● Experience with Continuous Integration/Continuous Deployment (CI/CD) tools and pipelines.
● Knowledge of best practices for software development, including code reviews, testing, and documentation.
● Strong problem-solving skills and ability to work independently and in a team environment.
● Good knowledge of Celery for managing asynchronous tasks and background jobs.
● Experience with Redis for caching and message brokering.
● Understanding of synchronous and asynchronous programming paradigms.

Preferred Qualifications:
● Experience with containerization and orchestration tools like Docker and Kubernetes.
● Familiarity with microservices architecture and serverless computing.
● Knowledge of other backend frameworks or languages such as Flask, FastAPI, or Django.
● Good to have: understanding of front-end technologies (e.g., JavaScript, HTML, CSS) for better collaboration with front-end teams.
● Experience with Agile/Scrum methodologies.

Technical Skills:
1. DynamoDB
2. API Gateway
3. Secrets Manager
4. S3
5. Lambda
6. Shield
7. ECS
8. Amplify
9. CloudFront
10. RDS
11. OpenSearch (Elasticsearch)

Experience: 5+ years
Quick Apply: https://goodspace.ai/jobs/Sr.-Python-Developer?id=28338&source=campaign_Linkedin-Kritika_SrPythonDeveloper-28338
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹1,800,000.00 per year
Location Type: In-person
Schedule: Day shift
Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Preferred)
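The synchronous-versus-asynchronous paradigm this posting asks about can be demonstrated without a task queue. A real Celery setup needs a running Redis or RabbitMQ broker, so the broker-free sketch below uses the standard-library asyncio to show why overlapping I/O-bound work matters; the task names and delays are illustrative.

```python
# Broker-free illustration of the async paradigm: two I/O-bound tasks
# run concurrently, so total time is roughly max(delays), not their sum.
import asyncio
import time

async def fetch(name, delay):
    await asyncio.sleep(delay)  # stands in for a network or database call
    return f"{name} done"

async def main():
    start = time.monotonic()
    # gather() schedules both coroutines at once and preserves order.
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))
```

With Celery the same idea is expressed as `@app.task`-decorated functions dispatched via `.delay()`, with Redis carrying the messages; the concurrency benefit is the same, but the work moves to separate worker processes.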
Posted 6 days ago
7.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Overview: We are looking for a highly skilled and experienced Lead UI Developer to join our growing engineering team. This is an opportunity for a dynamic individual who is proficient in building scalable, high-performance UI systems using Vue 3 or React with TypeScript. In this role, you will be responsible for leading the design and development of complex UI applications, integrating with APIs, managing legacy systems, and adopting Micro Frontend architectures. Alongside your technical contributions, you will also mentor junior developers and help establish best practices within the team. If you thrive in a fast-paced, product-driven environment and are ready to take on challenging projects while shaping the future of our team, we would love to have you on board.

Responsibilities
Lead the design, development, and maintenance of UI applications using Vue 3, React, and TypeScript.
Architect and implement Micro Frontend solutions to break down large UI systems into smaller, manageable, and independently deployable parts.
Ensure smooth integration with backend services and APIs to provide seamless user experiences.
Lead the development of unit tests with code coverage to ensure high-quality and reliable software.
Maintain and enhance legacy systems built with jQuery and other JavaScript frameworks, ensuring smooth transitions to modern architectures.
Troubleshoot, debug, and optimize front-end performance to improve speed, reliability, and scalability.
Mentor and guide junior developers, providing technical leadership and fostering a collaborative team environment.
Review and enforce best practices in coding, design, and architecture to maintain code quality across the team.
Actively contribute to sprint planning, backlog grooming, and ensuring timely delivery in an Agile environment (familiarity with Jira is a must).
Work closely with cross-functional teams to ensure the UI meets business requirements and integrates seamlessly with backend systems.
Stay up to date with the latest industry trends and technologies to continuously improve development practices.
Be a technical leader who takes ownership of complex features, ensuring the highest standards of quality.

Required Skills and Qualifications
7-9 years of experience in frontend development with strong expertise in Vue 3, React, and TypeScript.
Strong experience in coding and system design, with a solid understanding of architectural principles for building scalable, maintainable systems.
Expertise in Micro Frontend architecture and the ability to lead the adoption and implementation of such systems.
Experience with legacy systems, specifically with jQuery and other JavaScript frameworks, and the ability to modernize and refactor code.
Proficiency in consuming and integrating APIs effectively.
Expertise in unit testing frameworks and tools (e.g., Jest, Mocha), and ensuring code coverage for high-quality software.
Strong experience with Docker for containerization and deployment.
Proficiency in version control tools like GitHub or Bitbucket.
Excellent debugging skills and a methodical approach to problem-solving.
Strong experience with Agile/Scrum methodologies and familiarity with Jira for project management and issue tracking.
Ability to lead a team, mentor junior developers, and foster a collaborative environment.
Strong communication and problem-solving skills, with the ability to take on complex challenges and devise innovative solutions.
Self-driven with a passion for technology and a commitment to continuous learning.

Desired Skills
Familiarity with CI/CD pipelines and tools for automating deployment and testing.
Knowledge of backend technologies and willingness to collaborate with backend teams to optimize API integrations.
Experience with cloud services (AWS, Azure, GCP) and distributed systems.
Knowledge of performance optimization techniques and best practices for building highly performant web applications.
At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
Posted 6 days ago
2.5 - 3.5 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Title: Python AWS Developer
Experience: 2.5 to 3.5 years
Location: Jaipur (onsite)
Interview Mode: Face-to-Face (F2F)

Job Description: We are looking for a talented and enthusiastic Python AWS Developer with 2.5 to 3.5 years of experience to join our team in Jaipur (onsite). The ideal candidate should be well-versed in backend development using Python and have practical experience with various AWS services.

Key Responsibilities:
Develop and maintain scalable backend systems and microservices using Python.
Design and deploy cloud-native applications on AWS.
Work with AWS services such as Lambda, EC2, S3, API Gateway, and RDS.
Build and manage APIs and integrations with third-party systems.
Optimize application performance and troubleshoot issues.
Collaborate with cross-functional teams in Agile environments.

Required Skills:
2.5 to 3.5 years of hands-on experience in Python development.
Strong experience with AWS cloud services (e.g., Lambda, S3, EC2, RDS, CloudWatch).
Experience building RESTful APIs and working with frameworks like Flask or Django.
Solid understanding of relational databases (e.g., PostgreSQL, MySQL).
Familiarity with CI/CD pipelines and version control tools like Git.

Nice to Have:
Knowledge of containerization tools (Docker, ECS, or Kubernetes).
Experience with serverless architecture.
Exposure to infrastructure-as-code tools like CloudFormation or Terraform.

Additional Information:
Location: Jaipur (work from office only)
Interview Mode: Face-to-Face (F2F)
Notice Period: Immediate joiners preferred / up to 30 days
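The Lambda-plus-API-Gateway work described above centers on a handler function with a well-known event/response shape. The sketch below follows API Gateway's proxy-integration format; the payload fields are illustrative assumptions, and no AWS SDK call is made, so it can be exercised locally like a unit test.

```python
# Sketch of an AWS Lambda handler behind API Gateway (proxy integration).
# The "name" field and greeting are hypothetical example payload details.
import json

def lambda_handler(event, context):
    # API Gateway delivers the request body as a JSON string (or None).
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event, as one might do in a unit test.
resp = lambda_handler({"body": json.dumps({"name": "Jaipur"})}, None)
print(resp["statusCode"], resp["body"])
```

Deploying this is a matter of packaging the module and pointing the function's handler setting at `module.lambda_handler`; the local-invocation pattern shown keeps the business logic testable without AWS credentials.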
Posted 6 days ago
0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
Our Context
To support its rapid growth and continuously improve the quality of its software and processes, NeoXam is seeking a DevOps Engineer (M/F) to join its R&D team. As part of a cross-functional R&D team of around ten people, you will be responsible for implementing software production and quality assurance methods and tools. Within the R&D department, the DevOps Build team provides development teams with tools and methods on a 100% virtualized infrastructure to build, test, deliver, and manage the lifecycle of NeoXam products in a standardized, automated way. This automated system, known as the Software Factory, uses tools such as Bitbucket, Bamboo, Sonar, VeraCode, and Artifactory.

RESPONSIBILITIES
Your main responsibilities will include:
Setting up a replica of our Software Factory to test tool updates.
Managing version upgrades for the Software Factory tools.
Proposing improvements to the development, acceptance, and delivery processes, given your holistic view of the production chain.
Training R&D teams on new tools and processes you introduce.
Assisting in setting up deployment tools for our solutions on the cloud (AWS, Azure, etc.).

PROFILE
Proficiency in configuration management languages and tools such as Ansible, SaltStack, and Terraform.
Prior experience in setting up and managing a continuous deployment pipeline (e.g., Atlassian Bamboo, AWS CodePipeline, GitLab CI).
Knowledgeable about cloud computing principles and experienced with key AWS services.
Initial experience with Docker or Kubernetes.
Proficient in English.
Bonus:
Proficiency in Python.
Familiarity with CI/CD languages and tools like SVN/Git, Ant, Maven, Bitbucket, Bamboo, Jira, Sonar, Fisheye, Crucible, or similar alternatives (TFS, Hudson/Jenkins, Gradle).
Familiar with Agile methodologies, especially Scrum.
Experience with Atlassian products (Jira, Bitbucket, Bamboo, etc.).

At NeoXam, we value curiosity, commitment, and autonomy. We seek individuals who are solution-oriented and see continuous learning as essential to their development. The role requires rigor, perseverance, and the ability to communicate effectively and promote best practices.

Main responsibilities of the Cloud DevOps Engineer:
Streamline, standardize, and improve the cloud offering.
Work closely with internal departments (software R&D teams, Cloud R&D teams, Products, As a Service team) to establish standards and best practices for design and development.
Evaluate and recommend tools and technologies to improve development and deployment processes: managed services, containerization (Docker), orchestration (Kubernetes), and consideration of scalability/flexibility mechanisms (EKS, AKS, Lambda, EventBridge, etc.).
Collaborate with technical teams to ensure compliance with security standards.
Support pre-sales teams with their demos and POCs.
Participate in and support project teams (consulting, clients) on the cloud.
Ensure technological monitoring and experiment with new services offered by cloud providers (managed services, machine learning, AI, etc.).
Posted 6 days ago
3.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
For over four decades, PAR Technology Corporation (NYSE: PAR) has been a leader in restaurant technology, empowering brands worldwide to create lasting connections with their guests. Our innovative solutions and commitment to excellence provide comprehensive software and hardware that enable seamless experiences and drive growth for over 100,000 restaurants in more than 110 countries. Embracing our "Better Together" ethos, we offer Unified Customer Experience solutions, combining point-of-sale, digital ordering, loyalty and back-office software solutions as well as industry-leading hardware and drive-thru offerings. To learn more, visit partech.com or connect with us on LinkedIn, X (formerly Twitter), Facebook, and Instagram. Position Description PAR Technology is seeking a technically skilled Sustaining Engineer to maintain and troubleshoot our PAR POS and integrated payment solutions for the hospitality industry. This role focuses on resolving complex technical issues, improving system reliability, and supporting our production environments through hands-on problem-solving. The ideal candidate is a detail-oriented engineer with strong diagnostic skills and practical experience maintaining critical payment systems and POS infrastructure. 
Position Location: Jaipur
Reports To: Manager

What We’re Looking For
Diagnose and resolve production issues across POS, payment processing, and integrated systems
Develop and maintain scripts (PowerShell/Python) to automate troubleshooting and monitoring tasks
Analyze system logs, application dumps, and SQL query performance to identify root causes
Troubleshoot API integrations (REST/SOAP) and middleware message queues
Support PCI-compliant payment systems, including EMV/NFC transaction flows
Debug POS peripherals (printers, scanners, cash drawers) using manufacturer tools
Collaborate with engineering teams to implement permanent fixes for recurring issues
Maintain and enhance system monitoring using New Relic/Datadog
Document technical solutions and create knowledge base articles
Participate in on-call rotation for critical production support

Unleash your potential: What you will be doing and owning:
3+ years of hands-on experience maintaining production POS, payment, or distributed systems
Proficient in Windows/Linux system administration and log analysis
Strong SQL skills, including query optimization and deadlock troubleshooting
Scripting proficiency in PowerShell and/or Python for automation
Payment systems expertise (EMV/NFC, PCI DSS compliance requirements)
API troubleshooting with both REST (Postman, curl) and SOAP (SoapUI, Wireshark)
Networking knowledge (TCP/IP, Wi-Fi) and packet analysis (Wireshark/tcpdump)
Experience with observability platforms (New Relic/Datadog)
Understanding of cryptographic processes and TLS/SSL management
Ability to analyze .NET/Java application dumps and system event logs
POS integration layers and middleware experience

Preferred Qualifications
Certifications: Microsoft SQL Server, Network+, CCNA, or PCI Professional (PCIP)
Experience with containerization (Docker, Kubernetes)
Cloud platform knowledge (Azure, AWS)
Familiarity with POS software (NCR Aloha, Micros, or similar)
Hospitality/restaurant technology background

Interview Process
Interview #1: Phone screen with the Talent Acquisition team
Interview #2: Video interview with the technical teams (via MS Teams/F2F)
Interview #3: Video interview with the hiring manager (via MS Teams/F2F)

PAR is proud to provide equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. We also provide reasonable accommodations to individuals with disabilities in accordance with applicable laws. If you require reasonable accommodation to complete a job application, pre-employment testing, a job interview or to otherwise participate in the hiring process, or for your role at PAR, please contact accommodations@partech.com. If you’d like more information about your EEO rights as an applicant, please visit the US Department of Labor's website.
Posted 6 days ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Exp: 7+ years
Notice Period: Immediate to 15 days
Location: Chennai
Shift: 2pm to 11pm
Skills: React/React Native, .NET/C#, Azure or AWS services, CI/CD processes, system architecture/system design

FULL STACK TECH LEAD

Responsibilities:
Technical Leadership: Lead a team of full stack developers in designing, developing, and implementing high-quality software solutions. Provide technical guidance and mentorship to team members. Collaborate with cross-functional teams to ensure alignment with business objectives.
Full Stack Development: Hands-on development using a variety of technologies, including jQuery, Angular, React/React Native (mandatory), Vue.js, Node.js, and .NET/C# (mandatory). Design and implement scalable and maintainable code. Ensure the integration of front-end and back-end components for seamless functionality.
Project Management: Oversee project timelines, ensuring milestones are met and projects are delivered on time. Work closely with project managers to define project scope and requirements.
Code Review and Quality Assurance: Conduct code reviews to maintain code quality and ensure adherence to best practices. Implement and enforce coding standards and development processes.
Communication: Prioritize effective communication within the team and with stakeholders. Act as a liaison between technical and non-technical teams to ensure understanding and alignment of project goals.

Qualifications:
Bachelor’s degree in computer science or a related field.
10-15 years of relevant professional and hands-on software development experience.
Proven experience in full stack development with expertise in jQuery, Angular, React/React Native (mandatory), Vue.js, Node.js, and .NET/C# (mandatory).
Strong understanding of software architecture and design principles, e.g., 2-tier/3-tier and other system architecture and design patterns.
Proven experience in database technologies such as SQL Server, MySQL, or MongoDB, and NoSQL technologies.
Hands-on experience with CI/CD pipelines and deployment tools like GitHub, Maven, Jenkins, etc.
Excellent communication and interpersonal skills.
Experience in leading and mentoring development teams.

Additional Skills (Preferred):
Familiarity with cloud platforms such as AWS, Azure (preferred), or Google Cloud.
Knowledge of containerization and orchestration tools (Docker, Kubernetes).
Posted 6 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.
Enabling 100,000+ businesses with WhatsApp Engagement & Marketing
400 crores+ WhatsApp messages exchanged between businesses and users via AiSensy per year
Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
High impact, as businesses drive 25-80% of revenues using the AiSensy platform
Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc & 50+ angel investors

Key Responsibilities:
Build scalable architecture to handle millions of messages per day.
Lead a team of engineers and guide them on a day-to-day basis.
Collaborate with product managers, designers, and other engineers to define and implement new features and enhancements.
Write clean, maintainable, and efficient code following best practices and coding standards.
Conduct code reviews to ensure code quality and adherence to standards.
Identify and resolve performance bottlenecks, bugs, and other issues.
Mentor and provide guidance to junior engineers, fostering a culture of continuous learning and improvement.
Participate in architectural discussions and contribute to the technical direction of the team.
Stay up-to-date with the latest industry trends and technologies, and incorporate them into our development processes.
Ensure the security, reliability, and performance of our applications.
Collaborate with DevOps to streamline the deployment and monitoring processes.

Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field; Master’s degree preferred.
5+ years of overall experience, with proven experience of 3+ years as a Software Engineer and a strong track record of delivering high-quality software products.
Expertise in the MERN stack (MongoDB, Express.js, React, Node.js).
Strong understanding of JavaScript and modern JavaScript frameworks.
Experience with front-end technologies like HTML5, CSS3, and responsive design.
Strong understanding of software development methodologies, including Agile and Scrum.
Experience with RESTful API design and development.
Experience with database design and management, particularly MongoDB.
Familiarity with cloud platforms such as AWS.
Excellent problem-solving and analytical skills.
Strong communication and collaboration abilities.
Experience with version control systems such as Git.
Knowledge of containerization and orchestration tools like Docker and Kubernetes.
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Technical Skills – Must have:
· Strong application development experience in Python
· Knowledge of AI, NLP, ML, and chatbots is an added advantage
· Experience in web services using REST and SOAP
· Knowledge of Linux/Unix flavors
· Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes)
· Knowledge of different authentication and authorization techniques

Responsibilities:
· Proven work experience as a Technical Lead or in a similar role
· Experience in managing/working with projects/products across the lifecycle (design to rollout)
· Basic understanding of frontend development
· Assessing the business requirements and architecture and working with technical staff to implement/recommend solutions
· Defining, prototyping, and recommending technology solutions, detailing the implementation design, and identifying interfaces with other products for integration
· Experience in customer- and employee-facing business applications
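One of the authentication techniques this posting alludes to, verifying an HMAC-signed token, can be sketched with the standard library alone. The secret and token format below are illustrative assumptions; a production service would typically use a vetted library (e.g. a JWT implementation) rather than hand-rolled tokens.

```python
# Minimal sketch of HMAC-based token verification.
# SECRET and the "user=..." payload format are hypothetical placeholders.
import hashlib
import hmac

SECRET = b"demo-secret"  # assumption: a shared secret, for the sketch only

def sign(payload: str) -> str:
    """Produce a hex HMAC-SHA256 signature for a payload string."""
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def verify(payload: str, signature: str) -> bool:
    # compare_digest performs a constant-time comparison,
    # avoiding timing side channels on signature checks.
    return hmac.compare_digest(sign(payload), signature)

token = sign("user=42")
print(verify("user=42", token))   # valid signature
print(verify("user=99", token))   # tampered payload fails verification
```

The same sign/verify split underlies most token schemes: the server signs claims it hands out and later verifies that returned tokens were not altered.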
Posted 6 days ago
0.0 - 6.0 years
7 - 15 Lacs
Noida Sector 62, Noida, Uttar Pradesh
On-site
Introduction: EkVayu Tech is a fast-growing, research-focused technology company specializing in developing IT and AI applications. Our projects span modern front-end development, robust backend systems, cloud-native and on-prem infrastructure, AI/ML enablement, and automated testing pipelines. We are looking for a visionary technical leader to guide our engineering team and architecture strategy as we scale. We have products in the areas of cybersecurity, AI/ML/DL, signal processing, system engineering, and health-tech.

Job Title: Tech Architect / Tech Lead
Experience Level: 6-8 years
Location: Noida Sector 62, UP, India

Role Overview
As a Tech Architect / Tech Lead, you will be responsible for driving the overall engineering strategy, leading architecture and design decisions, managing development teams, and ensuring scalable, high-performance delivery of products. You’ll work closely with founders, product teams, and clients to define and deliver cutting-edge solutions that leverage AI and full-stack technologies.

Key Responsibilities
Architectural Leadership:
o Design, develop, and evolve scalable, secure, and performant architecture across front-end, backend, and AI services.
o Guide tech stack choices, frameworks, and tools aligned with business goals.
o Lead cloud/on-prem infrastructure decisions, including CI/CD, containerization, and DevOps automation for SaaS/on-prem products.
Hands-on Development & Technical Oversight:
o Contribute to critical system components and set examples in code quality and documentation.
o Oversee implementation of RESTful APIs, microservices, AI modules, and integration plugins.
o Champion test-driven development and automated QA processes.
AI Enablement:
o Guide development of AI-enabled features, data pipelines, and model integration (working with MLOps/data teams).
o Drive adoption of tools that enhance AI-assisted development and intelligent systems.
Infrastructure & Deployment:
o Architect hybrid environments across cloud and on-prem setups.
o Optimize deployment pipelines using tools like Docker, Kubernetes, GitHub Actions, or similar.
o Implement observability solutions for performance monitoring and issue resolution.
Engineering Management:
o Build and mentor a high-performing engineering team.
o Define engineering best practices, coding standards, and technical workflows.
o Own technical delivery timelines and code quality benchmarks.

Required Skills & Experience
6-8 years of experience in software engineering, with 3+ years in a leadership/architect role.
Strong proficiency in:
o Frontend: React.js, Next.js
o Backend: Python, Django, FastAPI
o AI/ML Integration: working knowledge of ML model serving, APIs, or pipelines
Experience building and scaling systems in hybrid (cloud/on-prem) environments.
Hands-on with CI/CD, testing automation, and modern DevOps workflows.
Experience with plugin-based architectures and extensible systems.
Deep understanding of security, scalability, and performance optimization.
Ability to translate business needs into tech solutions and communicate across stakeholders.

Preferred (Nice to Have)
Experience with the OpenAI API, LangChain, or custom AI tooling environments.
Familiarity with infrastructure-as-code (Terraform, Ansible).
Background in SaaS product development or AI-enabled platforms.
Knowledge of container orchestration (Kubernetes) and microservice deployments.
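The plugin-based, extensible architecture this role calls for is often built around a registry that components join via a decorator. The sketch below is a minimal illustration under that assumption; the exporter names and classes are hypothetical, not from any actual EkVayu codebase.

```python
# Sketch of a plugin registry: components self-register under a name,
# and callers dispatch by name without importing concrete classes.
PLUGINS = {}

def plugin(name):
    """Class decorator that registers an implementation under a name."""
    def register(cls):
        PLUGINS[name] = cls
        return cls
    return register

@plugin("csv")
class CsvExporter:
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)

@plugin("json")
class JsonExporter:
    def export(self, rows):
        import json
        return json.dumps(rows)

def export(fmt, rows):
    # Dispatch through the registry; new formats need no changes here.
    return PLUGINS[fmt]().export(rows)

print(export("csv", [[1, 2], [3, 4]]))  # prints two CSV lines
```

New capabilities are added by dropping in another decorated class, which keeps the core dispatch code closed to modification while the system stays open to extension.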
What We Offer Competitive compensation Opportunity to shape core technology in a fast-growing company Exposure to cutting-edge AI applications and infrastructure challenges Collaborative and open-minded team culture How to Apply Send your resume, portfolio (if applicable), and a brief note on why you’re excited to join us to HR@EkVayu.com Job Types: Full-time, Permanent Pay: ₹700,000.00 - ₹1,500,000.00 per year Schedule: Day shift Ability to commute/relocate: Noida Sector 62, Noida, Uttar Pradesh: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Full-stack development: 6 years (Required) Work Location: In person
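The plugin-based, extensible architecture this role asks for can be illustrated with a minimal registry sketch. This is a generic pattern, not EkVayu's actual codebase; the plugin name and factory are hypothetical:

```python
# Minimal plugin-registry sketch (illustrative only; names are hypothetical).
from typing import Callable, Dict


class PluginRegistry:
    """Maps plugin names to factory callables so the core stays extensible:
    new capabilities register themselves without the core changing."""

    def __init__(self) -> None:
        self._plugins: Dict[str, Callable[[], object]] = {}

    def register(self, name: str) -> Callable:
        """Decorator that records a factory under `name`."""
        def decorator(factory: Callable[[], object]) -> Callable[[], object]:
            self._plugins[name] = factory
            return factory
        return decorator

    def create(self, name: str) -> object:
        if name not in self._plugins:
            raise KeyError(f"unknown plugin: {name}")
        return self._plugins[name]()


registry = PluginRegistry()


@registry.register("pdf-extractor")
def make_pdf_extractor() -> dict:
    # A real plugin would return an extractor object; a dict keeps the sketch runnable.
    return {"kind": "pdf-extractor"}
```

The core only ever calls `registry.create(name)`, so integration plugins can ship in separate packages that register themselves on import.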
Posted 6 days ago
7.0 - 10.0 years
0 Lacs
India
On-site
Summary
The Security Engineer will collaborate with teams to design fine-grained technical controls around access to sensitive datasets and ensure their compliance with regulatory requirements, while providing strong technical support that enables technical decisions to be made with confidence and at pace. This role generally works a standard business week, but occasional weekend or out-of-hours work may be required.
Responsibilities (other duties may be assigned)
• Carry out host-based vulnerability assessments
• Drive the remediation of identified vulnerabilities across multiple departments
• Monitor and respond to alerts
• Support incident response in case of a security breach
• Support and continuously enhance our existing security services for the cloud
• Collaborate across multiple teams to remove blockers and drive implementation forward
• Work with internal and external support teams
Education and/or Experience
• College degree or equivalent work experience
• 7-10 years in Information Technology, with a comprehensive security operations background
• Experience with security frameworks (NIST, MITRE, CIS, ISO & SOC 2)
• Hands-on experience with the configuration of cloud technologies
Knowledge, Skills and Abilities
• Extensive knowledge of cloud operations and automation
• Extensive knowledge of key security practices in access control, application security, network security, and security strategy
• Knowledge of regulatory data requirements in the cloud, aligned with HIPAA, GDPR, or other compliance standards
• Basic understanding of containerization technology
• Strong knowledge of privileged access management, vulnerability management, and threat intelligence
• Ability to build multilayer working relationships with both technical and business stakeholders, gaining their respect and trust through knowledge and professionalism
• Strong root-cause analysis, problem-solving, and analytical skills
• Excellent communication and analytical skills for incident, problem, and stakeholder management
• Ability to effectively document operational processes and procedures
• Adaptable to a fast-paced work environment, with an ability to prioritize projects
Technical Skills
• Basic understanding of DevOps tools (GitHub, Jenkins, Nexus, Ansible, etc.)
• Basic understanding of SecOps tools (Rapid7, Cloud Conformity, Sumo Logic, etc.)
• Excellent understanding of Amazon Web Services (AWS): the AWS shared responsibility model, KMS, CloudTrail, CloudWatch, CloudFormation, Lambda, and GuardDuty
• Extensive knowledge of SIEM and SOAR tooling
• Proficient in Linux, Node.js, YAML, and JSON
Language Skills
• Ability to speak the English language proficiently, both verbally and in writing.
Work Environment
The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
• Employee works primarily in a home office environment.
• The home office must be a well-defined work area, separate from normal domestic activity and complete with all essential technology, including, but not limited to, a separate phone, scanner, printer, and computer, as required to effectively perform their duties.
Work Requirements
• Compliance with all relevant FINEOS Global policies and procedures related to Quality, Security, Safety, Business Continuity, and Environmental systems.
• Travel and fieldwork, including international travel, may be required; the employee must therefore possess, or be able to acquire, a valid passport.
• Must be legally eligible to work in the country in which you are hired.
FINEOS is an Equal Opportunity Employer.
FINEOS does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
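Driving remediation of identified vulnerabilities usually starts with severity-based prioritization. A minimal triage sketch, with a hypothetical finding format not tied to any particular scanner such as Rapid7:

```python
# Severity-based vulnerability triage sketch (illustrative; field names are hypothetical).
from typing import Dict, List


def triage(findings: List[Dict], cvss_threshold: float = 7.0) -> List[Dict]:
    """Return findings at or above the CVSS threshold, highest score first,
    so the most critical hosts get remediated first."""
    urgent = [f for f in findings if f.get("cvss", 0.0) >= cvss_threshold]
    return sorted(urgent, key=lambda f: f["cvss"], reverse=True)


findings = [
    {"host": "web-01", "cve": "CVE-2024-0001", "cvss": 9.8},
    {"host": "db-02", "cve": "CVE-2024-0002", "cvss": 5.3},
    {"host": "app-03", "cve": "CVE-2024-0003", "cvss": 7.5},
]
```

In practice the threshold and ordering would also weigh asset criticality and exploit availability, but a CVSS cut-off is a common first pass.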
Posted 6 days ago
2.0 years
0 Lacs
India
On-site
Job Description:
We are seeking an experienced Linux engineer to design, develop, and optimize a Linux OS for running containers on custom hardware. The ideal candidate has expertise in Linux internals, networking, hardware integration, shell scripting, Python, Go, systemd, Kubernetes, and container technologies. You will play a key role in customizing the Linux OS for a variety of hardware and product requirements, and in enhancing system performance, security, and stability while ensuring smooth container orchestration. Experience working with x64- and ARM-based hardware and RAID concepts is expected, as is experience with AWS EC2 instances. Familiarity with Fedora CoreOS and writing custom Ignition configs is an added advantage.
Responsibilities:
- Develop and optimize a container-based OS with a focus on performance, security, and scalability.
- Work extensively with the Linux kernel, system internals, and device drivers to enhance OS capabilities.
- Develop and maintain systemd services for managing system processes and dependencies.
- Write and optimize shell scripts, Python, Go, and Makefiles for code and build automation, system monitoring, and process control.
- Automate OS installation, network configuration, iptables rules, and application installation.
- Troubleshoot and resolve low-level system, hardware, and networking issues.
- Ensure system security by implementing best practices and performing vulnerability assessments.
- Collaborate with the team to streamline CI/CD pipelines and container orchestration.
- Work with Kubernetes and container runtime environments to manage and optimize OS performance.
- Document system design, technical processes, and best practices.
- Test the OS with a range of hardware before release.
- Working knowledge of AWS EC2.
Requirements:
- 2+ years of experience in Linux OS development.
- Strong understanding of Linux internals, file systems, and process management.
- Experience with containerization technologies (Docker, Podman, LXC, or similar).
- Proficiency in system-level programming using Python, Go, and shell scripting.
- In-depth knowledge of systemd and its components.
- Solid grasp of network protocols (TCP/IP, DNS, DHCP, VPN, firewall configuration, etc.).
- Familiarity with hardware integration, device drivers, and bootloaders.
- Experience with security hardening: iptables, SELinux, and access-control policies.
- Strong problem-solving skills and the ability to debug low-level system issues.
- Familiarity with CI/CD pipelines, automation tools, and infrastructure-as-code.
- Hands-on experience with Kubernetes, container orchestration tools, and AWS.
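Automating systemd service management, as the responsibilities above describe, often means generating unit files programmatically during OS customization. A minimal sketch in Python; the service name, description, and command are hypothetical:

```python
# Render a minimal systemd service unit for a container workload
# (illustrative sketch; service name and ExecStart command are hypothetical).


def render_unit(description: str, exec_start: str) -> str:
    """Render a systemd service unit that restarts on failure and
    starts only after the network is online."""
    return (
        "[Unit]\n"
        f"Description={description}\n"
        "After=network-online.target\n"
        "Wants=network-online.target\n"
        "\n"
        "[Service]\n"
        f"ExecStart={exec_start}\n"
        "Restart=on-failure\n"
        "RestartSec=5\n"
        "\n"
        "[Install]\n"
        "WantedBy=multi-user.target\n"
    )


unit = render_unit(
    "Example container agent",
    "/usr/bin/podman run --rm example-agent:latest",
)
# A provisioning script would write this to /etc/systemd/system/agent.service,
# then run `systemctl daemon-reload` and `systemctl enable --now agent`.
```

On Fedora CoreOS, the same unit text would typically be embedded in a Butane/Ignition config rather than written at runtime.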
Posted 1 week ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Role:
We are seeking a highly experienced and technically strong Head of Engineering to lead and scale our engineering function. The ideal candidate is a hands-on technology leader from a Tier 1 institute (IITs/NITs/BITS/IIITs) with deep expertise in Java, Python, and modern cloud-native architectures. You will be responsible for setting the technical vision, driving engineering excellence, and building a high-performing engineering team to deliver scalable and robust solutions.
Key Responsibilities:
Lead the entire engineering organization across backend, frontend, DevOps, and QA functions.
Drive system architecture and technical strategy aligned with business goals.
Architect and guide development of scalable applications using Java and Python.
Set and enforce engineering best practices (code quality, CI/CD, test coverage, documentation).
Collaborate closely with Product, Design, and Data Science teams for feature development and delivery.
Lead recruitment, mentoring, and performance management of engineering teams.
Ensure availability, performance, security, and scalability of production systems.
Establish and track key engineering KPIs to drive continuous improvement.
Required Qualifications:
B.Tech/M.Tech in Computer Science or a related field from a Tier 1 institute (IIT/NIT/BITS/IIIT).
12+ years of experience in software development; 5+ years in engineering leadership roles.
Proven expertise in Java and Python, with deep architectural understanding.
Strong knowledge of system design, data structures, algorithms, and object-oriented programming.
Hands-on experience with microservices architecture and RESTful APIs.
Proficient in cloud platforms (AWS/GCP/Azure), containerization (Docker, Kubernetes), and CI/CD pipelines.
Excellent leadership, communication, and stakeholder management skills.
Ability to make high-quality technical decisions in a fast-paced environment.
Nice to Have:
Experience in scaling engineering orgs from early to growth stages.
Exposure to frontend technologies (React/Angular) and/or data platforms (Kafka, Spark).
Familiarity with agile methodologies and modern engineering tools (JIRA, GitHub, Jenkins, etc.).
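Tracking engineering KPIs, as the role requires, can start as simply as computing change lead time from delivery records. A minimal sketch; the record format and timestamps are hypothetical:

```python
# Compute change lead time (commit-to-deploy) from delivery records
# (illustrative sketch; the record format is hypothetical).
from datetime import datetime
from typing import Dict, List


def lead_times_hours(deploys: List[Dict]) -> List[float]:
    """Hours from commit to deploy for each release — one of the
    commonly tracked delivery KPIs."""
    out = []
    for d in deploys:
        committed = datetime.fromisoformat(d["committed_at"])
        deployed = datetime.fromisoformat(d["deployed_at"])
        out.append((deployed - committed).total_seconds() / 3600.0)
    return out


deploys = [
    {"committed_at": "2024-05-01T09:00:00", "deployed_at": "2024-05-01T15:00:00"},
    {"committed_at": "2024-05-02T10:00:00", "deployed_at": "2024-05-03T10:00:00"},
]
avg_lead_time = sum(lead_times_hours(deploys)) / len(deploys)
```

In a real setup these records would come from the CI/CD system (e.g. pipeline events), and the team would track the trend rather than a single average.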
Posted 1 week ago
0.0 - 2.0 years
6 - 8 Lacs
Vadodara, Gujarat
On-site
Company Description
Digiflux Technologies Private Limited, based in Vadodara, is a leading provider of comprehensive digital solutions. We specialize in developing web and mobile apps, as well as delivering innovative IoT-based solutions tailored to various organizational needs.
Role Description
We are seeking a Backend Engineer with 3 to 6 years of experience to join our team at Digiflux Technologies Private Limited in Vadodara. This is a full-time, on-site position. The successful candidate will be responsible for designing, developing, and maintaining high-performance backend APIs using Node.js. You will work closely with the frontend and DevOps teams to ensure seamless integration and optimal application performance, with a focus on handling large-scale traffic and ensuring fast response times.
Key Responsibilities
Develop and maintain scalable backend APIs using Node.js with frameworks like Express.js or NestJS.
Ensure efficient API performance, focusing on low-latency and high-throughput handling.
Optimize backend systems to handle large-scale traffic and data load.
Work closely with frontend and DevOps teams to ensure seamless integration and application performance.
Implement secure and maintainable code following industry best practices for backend development.
Perform thorough unit testing of APIs, ensuring functionality, reliability, and high performance.
Monitor backend performance and troubleshoot issues to improve response times and system stability.
Qualifications
3 to 6 years of experience in backend development, with hands-on experience in Node.js and frameworks like Express.js or NestJS.
Good understanding of backend development tools and techniques for creating high-quality APIs with fast response times.
Familiarity with database design, querying, and optimization (SQL or NoSQL databases).
Experience in developing scalable backend systems capable of handling high traffic loads.
Knowledge of API security best practices, authentication mechanisms, and data protection.
Strong problem-solving skills and attention to detail.
Bachelor’s degree in Computer Science or a related field, or equivalent work experience.
Good to Have
Knowledge of containerization technologies such as Docker.
Basic understanding of microservices architecture and how it applies to scalable systems.
Familiarity with cloud architecture planning and AWS services (e.g., EC2, RDS, Lambda, AWS Batch) for better infrastructure planning.
Experience working with distributed systems and improving application scalability.
Additional Information
The role is based in Vadodara and requires a full-time on-site presence.
Competitive salary and benefits package commensurate with experience.
Opportunities for professional growth and development within a dynamic and innovative company.
Job Type: Full-time
Pay: ₹600,000.00 - ₹800,000.00 per year
Location Type: In-person
Schedule: Day shift
Ability to commute/relocate: Vadodara, Gujarat: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Current CTC, Expected CTC, Notice Period
Experience: total relevant: 2 years (Preferred)
Work Location: In person
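Handling large-scale traffic at low latency usually involves throttling at the API layer. The token-bucket algorithm behind most rate-limiting middleware (e.g. express-rate-limit in a Node.js stack) is language-agnostic; a minimal sketch in Python for illustration:

```python
# Token-bucket rate limiter sketch (illustrative; a real Node.js API would
# typically use middleware, but the underlying algorithm is the same).


class TokenBucket:
    """Allows bursts of up to `capacity` requests, refilling at `rate` tokens/sec."""

    def __init__(self, capacity: int, rate: float) -> None:
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = 0.0  # timestamp of the previous check; passed in for testability

    def allow(self, now: float) -> bool:
        """Return True if a request at time `now` (seconds) may proceed."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# Burst of 2 allowed, then refill at 1 request/second.
bucket = TokenBucket(capacity=2, rate=1.0)
```

Passing the clock in as an argument (rather than calling `time.time()` inside) keeps the limiter deterministic and easy to unit-test, which matches the posting's emphasis on thorough unit testing.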
Posted 1 week ago