0.0 - 1.0 years
1 - 1 Lacs
Coimbatore
On-site
We're looking for self-driven learners, not clock-watchers. If you believe in growth through ownership, are willing to be challenged, and care about your team as much as your task, we'll give you the space to do your best work.

Job Title: Software Developer
Specialization: MATLAB, Python
Education: B.E., B.Tech., M.E., M.Tech.
Experience: 0-1 year as a MATLAB/Python Developer or Programmer
Location: Gandhipuram, Coimbatore, TN, India
Note: Candidates must be ready to attend a direct offline interview immediately. Strictly no online interviews. No time wasters.

Requirements:
B.E., B.Tech., M.E., or M.Tech. graduate with 0-1 year of working knowledge in MATLAB or Python development. Freshers with adequate knowledge may also apply. Salary is negotiable for experienced candidates.
Familiarity with the frameworks, notebooks, and library functions of Python, MATLAB, and Simulink. Java is an added advantage.
Relevant course certifications should be included, if available.
Strong communication skills and technical knowledge as a Data Science Engineer, Machine Learning Engineer, NLP Engineer, or in a similar role.
Knowledge of image processing, data mining, big data, deep learning, machine learning, artificial intelligence, network technologies, signal processing, communications, power electronics, etc., is preferred.
Excellent problem-solving ability, effective time management, multitasking, and a self-starter, self-learner attitude toward new concepts.
The first 3 months are a trainee period, followed by a two-year service agreement with a two-month notice period.

Responsibilities:
Write reusable, testable, and efficient MATLAB, Java, and Python code for academic projects based on IEEE research papers.
Design and implement low-latency, high-availability applications using both MATLAB and Python.
Work in R&D teams supporting academic project development and documentation (Ph.D., M.Phil., Engineering, UG/PG projects).
Work effectively with the R&D team to create innovative and novel ideas for projects.

Job Type: Full-time
Pay: ₹10,000.00 - ₹15,000.00 per month
Location Type: In-person
Schedule: Day shift, fixed shift
Ability to commute/relocate: Coimbatore - 641012, Tamil Nadu: reliably commute or plan to relocate before starting work (Required)
Application Questions:
Do you agree to the 2-year service agreement with the company?
Which language are you expert in: Python, Java, or MATLAB?
Education: Bachelor's (Required)
Language: Tamil (Required)
Work Location: In person
Posted 1 week ago
4.0 years
0 Lacs
Ahmedabad
Remote
At SmartBear, we believe building great software starts with quality, and we're helping our customers make that happen every day. Our solution hubs (SmartBear API Hub, SmartBear Insight Hub, and SmartBear Test Hub, featuring HaloAI) bring visibility and automation to software development, making it easier for teams to deliver high-quality software faster. SmartBear is trusted by over 16 million developers, testers, and software engineers at 32,000+ organizations, including innovators like Adobe, JetBlue, FedEx, and Microsoft.

Software Engineer - Java, Zephyr Enterprise
Solve challenging business problems and build highly scalable applications
Design, document, and implement new systems in Java 8/17
Build microservices, specifically with HTTP, REST, JSON, and XML

Product intro: Zephyr Enterprise is undergoing a transformation to better align our products to end users' requirements while maintaining our market-leading position and strong brand reputation across the test management vertical. Go to our product page if you want to know more about Zephyr Test Management Products | SmartBear. You can even take a free trial to check it out.

About the role: As a Software Engineer, you will be an integral part of this transformation, solving challenging business problems and building highly scalable, available applications that provide an excellent user experience. Reporting to the Lead Engineer, you will develop solutions using available tools and technologies, assist the engineering team in problem resolution through hands-on participation, and communicate status, issues, and risks precisely and in a timely manner. You will write code per product requirements, create new products, create automated tests, contribute to system testing, and follow an agile mode of development.
You will interact with both business and technical stakeholders to deliver high-quality products and services that meet business requirements and expectations while applying the latest available tools and technology. You will develop scalable, real-time, low-latency data egress/ingress solutions in an agile delivery method, create automated tests, and contribute to system testing.

We are looking for someone who:
Can design, document, and implement new systems, as well as enhancements and modifications to existing software, with code that complies with design specifications and meets security and Java best practices.
Has 4-7 years of experience, with hands-on work on the Java 17 platform or higher, and holds a Bachelor's Degree in Computer Science, Computer Engineering, or a related technical field.
API-driven development: experience working with remote data via SOAP, REST, and JSON, and delivering high-value projects using Agile (Scrum) methodology, preferably with the Jira tool.
Good understanding of OOAD, the Spring Framework, and microservices-based architecture.
Experience with application performance tuning, scaling, security, and resiliency best practices.
Experience with relational or NoSQL databases, database-level design, and core Java patterns.
Experience with the AWS stack (RDS, S3, ElastiCache), SSDLC, Agile methodologies, and development experience in a Scrum environment.
Experience with messaging queues, preferably RabbitMQ or ActiveMQ/Artemis.
Experience with the Atlassian suite of products and the related ecosystem of plugins.
Experience with React and JavaScript is good to have.

Why you should join the SmartBear crew:
You can grow your career at every level.
We invest in your success as well as the spaces where our teams come together to work, collaborate, and have fun.
We love celebrating our SmartBears; we even encourage our crew to take their birthdays off.
We are guided by a People and Culture organization, an important distinction for us.
We think about our team holistically, the whole person. We celebrate our differences in experiences, viewpoints, and identities because we know it leads to better outcomes.

Did you know: Our main goal at SmartBear is to make our technology-driven world a better place. SmartBear is committed to ethical corporate practices and social responsibility, promoting good in all the communities we serve. SmartBear is headquartered in Somerville, MA, with offices across the world, including Galway, Ireland; Bath, UK; Wroclaw, Poland; and Bangalore, India. We've won major industry (product and company) awards, including the B2B Innovators Award, Content Marketing Association, IntellyX Digital Innovator, and Built In Best Places to Work.

SmartBear is an equal employment opportunity employer and encourages success based on our individual merits and abilities without regard to race, color, religion, gender, national origin, ancestry, mental or physical disability, marital status, military or veteran status, citizenship status, age, sexual orientation, gender identity or expression, genetic information, medical condition, sex, sex stereotyping, pregnancy (which includes pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), or any other legally protected status.
Posted 1 week ago
25.0 years
5 - 9 Lacs
Noida
On-site
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation that's fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

NVIDIA is looking for a passionate member to join our DGX Cloud Engineering Team as a Sr. Site Reliability Engineer. In this role, you will play a significant part in helping to craft and guide the future of AI and GPUs in the cloud. NVIDIA DGX Cloud is a cloud platform tailored for AI tasks, enabling organizations to transition AI projects from development to deployment in the age of intelligent AI. Are you passionate about cloud software development and strive for quality? Do you pride yourself on building cloud-scale software systems? If so, join our team at NVIDIA, where we are dedicated to delivering GPU-powered services around the world!

What you'll be doing:
You will play a crucial role in ensuring the success of the Omniverse on DGX Cloud platform by helping to build our deployment infrastructure processes, creating world-class SRE measurement and automation tools to improve the efficiency of operations, and maintaining a high standard of service operability and reliability.
Design, build, and implement scalable cloud-based systems for PaaS/IaaS.
Work closely with other teams on new products or features/improvements of existing products.
Develop, maintain, and improve cloud deployment of our software.
Participate in the triage and resolution of complex infra-related issues.
Collaborate with developers, QA, and Product teams to establish, refine, and streamline our software release process and software observability to ensure service operability, reliability, and availability.
Maintain services once live by measuring and monitoring availability, latency, and overall system health using metrics, logs, and traces.
Develop, maintain, and improve automation tools that help improve the efficiency of SRE operations.
Practice balanced incident response and blameless postmortems.
Be part of an on-call rotation to support production systems.

What we need to see:
BS or MS in Computer Science or an equivalent program from an accredited university/college.
8+ years of hands-on software engineering or equivalent experience.
Demonstrated understanding of cloud design in the areas of virtualization, global infrastructure, distributed systems, and security.
Expertise in Kubernetes (K8s) and KubeVirt, and in building RESTful web services.
Understanding of building agentic AI solutions, preferably with NVIDIA open-source AI solutions.
Demonstrated working experience with SRE principles such as metrics emission for observability, monitoring, and alerting using logs, traces, and metrics.
Hands-on experience working with Docker and containers, Infrastructure as Code tools like Terraform, and CI/CD deployment.
Knowledge of working with CSPs, for example AWS (Fargate, EC2, IAM, ECR, EKS, Route53, etc.) and Azure.

Ways to stand out from the crowd:
Expertise in technologies such as StackStorm, OpenStack, Red Hat OpenShift, and AI databases like Milvus.
A track record of solving complex problems with elegant solutions.
Prior experience with Go, Python, and React.
Demonstrated delivery of complex projects in previous roles.
Ability to develop front-end applications with concepts of SSA and RBAC.

We are an equal opportunity employer and value diversity at our company.
We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
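The SRE duties in this listing revolve around measuring availability and latency against targets. As a rough, illustrative sketch of the underlying arithmetic (the function names are invented, and this is not NVIDIA tooling), an SLO error-budget calculation looks like:

```python
# Error-budget sketch: given an SLO target and observed request outcomes,
# compute availability and the fraction of the error budget still unspent.
# Illustrative names and numbers only.

def availability(total: int, failed: int) -> float:
    """Fraction of requests that succeeded."""
    return (total - failed) / total

def error_budget_remaining(slo_target: float, total: int, failed: int) -> float:
    """Unspent fraction of the error budget; negative means the SLO is blown."""
    allowed_failures = (1.0 - slo_target) * total  # budget expressed in requests
    if allowed_failures == 0:
        return 0.0 if failed == 0 else -1.0
    return 1.0 - failed / allowed_failures

# Example: a 99.9% SLO over 1,000,000 requests allows 1,000 failures.
total, failed = 1_000_000, 250
print(f"availability={availability(total, failed):.5f}")          # 0.99975
print(f"budget remaining={error_budget_remaining(0.999, total, failed):.2%}")  # 75.00%
```

With a 99.9% target and 250 failures out of a million requests, three quarters of the error budget is still unspent, which is the kind of signal that drives the alerting and incident-response work described above.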
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
The engineer will be responsible for analyzing the results of performance tests and providing recommendations for improvements. This role may also involve working with development teams to optimize the performance of their code, and with system administrators to ensure that the underlying infrastructure is properly configured for optimal performance. It usually requires strong Java programming skills and experience with microservices, cloud infrastructure and technologies, and performance testing methodologies. Additionally, the engineer should understand front-end performance metrics and help teams optimize performance scores.

Responsibilities
The specific responsibilities of a performance engineer managing a large, distributed application built on microservices, Spring Boot, and Google Cloud may include:
Gather performance requirements using templates, logs, and monitoring tools.
Work with product teams to understand workload models for each system and gather performance requirements.
Create performance test plans and scenarios, and develop test scripts in JMeter/k6/Gatling to meet the objectives of the performance test plan.
Set up performance testing and performance regression testing guidelines and standards.
Conduct system performance testing to ensure system reliability, capacity, and scalability.
Perform load testing, endurance testing, volume testing, scalability testing, spike testing, and stress testing using JMeter/LoadRunner.
Perform root cause analysis using performance monitoring/profiling tools and identify potential system and resource bottlenecks.
Analyze thread dumps, heap dumps, kernel logs, network stats, APM metrics, and application logs to troubleshoot CPU/memory/resource hotspots, API latency, and application/platform health.
Experience with front-end application performance tools like Lighthouse, WebPageTest, PageSpeed Insights, etc.
Collaborate with multiple product teams and help in performance tuning of applications.
Shift-left and first-time quality: automate performance testing and integrate it into the existing CI/CD pipelines for a better quality and engineering experience.
Performance testing tools: experience with tools such as JMeter, LoadRunner, and Gatling.
Knowledge of web technologies: HTML, CSS, JavaScript, and HTTP is essential, along with strong analytical skills to interpret data and identify patterns, trends, and issues related to webpage load and performance.
Communication skills: effective communication to collaborate with developers, testers, and other stakeholders to identify and resolve performance issues.
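Reports from the load-testing tools named in this listing (JMeter, LoadRunner, Gatling) summarize each run with latency percentiles such as p50 and p95. The nearest-rank computation those summaries rest on can be sketched in a few lines; this is an illustration with made-up sample data, not any tool's actual source:

```python
# Nearest-rank percentile over a list of latency samples: the smallest value
# such that at least p% of samples are <= it. Hand-rolled for illustration.

def percentile(samples, p):
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # nearest rank = ceil(p/100 * n); -(-a // b) is ceiling division
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[int(rank) - 1]

# Simulated response times in milliseconds (two slow outliers)
latencies_ms = [12, 15, 11, 240, 14, 13, 16, 18, 17, 500]
print("p50:", percentile(latencies_ms, 50))  # 15
print("p95:", percentile(latencies_ms, 95))  # 500
```

The gap between p50 (15 ms) and p95 (500 ms) is exactly the kind of tail-latency pattern the root-cause-analysis duties above are meant to chase down.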
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Comfort with Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.).
Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.).
High familiarity with DL theory and practice in NLP applications.
Comfort coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas.
Comfort using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others.
Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy or unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.).
Real-world implementation of BERT or other fine-tuned transformer models (sequence classification, NER, or QA), from data preparation, model creation, and inference through deployment.
Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI; good working knowledge of other open-source packages for benchmarking and deriving summaries.
Experience using GPU/CPU on cloud and on-prem infrastructure.
Skills to leverage cloud platforms for data engineering, big data, and ML needs.
Use of Docker (experience with experimental Docker features, docker-compose, etc.).
Familiarity with orchestration tools such as Airflow and Kubeflow.
Experience with CI/CD and infrastructure-as-code tools like Terraform.
Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
Ability to develop APIs with compliant, ethical, secure, and safe AI tools.
Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
A deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus.
Responsibilities
Design NLP/LLM/GenAI applications and products by following robust coding practices.
Explore state-of-the-art models and techniques so they can be applied to automotive industry use cases.
Conduct ML experiments to train and infer models; if need be, build models that abide by memory and latency restrictions.
Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools.
Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.).
Converge multiple bots into super apps using LLMs with multimodality.
Develop agentic workflows using AutoGen, Agent Builder, and LangGraph.
Build modular AI/ML products that can be consumed at scale.
Data engineering: skills to perform distributed computing (specifically parallelism and scalability in data processing, modeling, and inference through Spark, Dask, or RAPIDS cuDF).
Ability to build Python-based APIs (e.g., use of FastAPI, Flask, or Django).
Experience with Elasticsearch, Apache Solr, and vector databases is a plus.

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, Maths, or Science.
Completion of any modern NLP/LLM courses or participation in open competitions is also welcomed.
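The "fundamental text data processing" the requirements call out (regex cleanup, token/word analysis) can be illustrated with a small, stdlib-only sketch; the sample text and helper names here are made up for the example:

```python
import re
from collections import Counter

# Toy text-cleaning and token-frequency pipeline: lowercase, strip URLs,
# drop punctuation noise, collapse whitespace, then count tokens.

def clean(text: str) -> str:
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[^a-z0-9\s']", " ", text)   # drop punctuation/noise
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

def top_tokens(text: str, k: int = 3):
    """k most frequent tokens after cleaning."""
    return Counter(clean(text).split()).most_common(k)

sample = "NLP, NLP... see https://example.com -- tokens & NOISE; tokens!"
print(clean(sample))        # nlp nlp see tokens noise tokens
print(top_tokens(sample, 2))
```

In a real pipeline this sits in front of the clustering, spelling-correction, and transformer fine-tuning steps the listing describes.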
Posted 1 week ago
5.0 years
10 - 15 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 1000000 - Rs 1500000 (i.e., INR 10-15 LPA)
Min Experience: 5 years
Location: Bengaluru
JobType: full-time

Requirements
We are seeking a highly skilled and experienced Computer Vision Engineer to join our growing AI team. This role is ideal for someone with strong expertise in deep learning and a solid background in real-time video analytics, model deployment, and computer vision applications. You'll be responsible for developing scalable computer vision pipelines and deploying them across cloud and edge environments, helping build intelligent visual systems that solve real-world problems.

Key Responsibilities:
Model Development & Training: Design, train, and optimize deep learning models for object detection, segmentation, and tracking using frameworks like YOLO, UNet, Mask R-CNN, and Deep SORT.
Computer Vision Applications: Build robust pipelines for computer vision applications including image classification, real-time object tracking, and video analytics using OpenCV, NumPy, and TensorFlow/PyTorch.
Deployment & Optimization: Deploy trained models on Linux-based GPU systems and edge devices (e.g., Jetson Nano, Google Coral), ensuring low-latency performance and efficient hardware utilization.
Real-Time Inference: Implement and optimize real-time inference systems, ensuring minimal delay in video processing pipelines.
Model Management: Utilize tools like Docker, Git, and MLflow (or similar) for version control, environment management, and model lifecycle tracking.
Collaboration & Documentation: Work cross-functionally with hardware, backend, and software teams. Document designs, architectures, and research findings to ensure reproducibility and scalability.

Technical Expertise Required:
Languages & Libraries: Advanced proficiency in Python and solid experience with OpenCV, NumPy, and other image processing libraries.
Deep Learning Frameworks: Hands-on experience with TensorFlow and PyTorch, and integration with model training pipelines.
Computer Vision Models:
Object Detection: YOLO (all versions)
Segmentation: UNet, Mask R-CNN
Tracking: Deep SORT or similar
Deployment Skills:
Real-time video analytics implementation and optimization
Experience with Docker for containerization
Version control using Git
Model tracking using MLflow or comparable tools
Platform Experience: Proven experience in deploying models on Linux-based GPU environments and edge devices (e.g., NVIDIA Jetson family, Coral TPU).

Professional & Educational Requirements:
Education: B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or a related discipline.
Experience: Minimum 5 years of industry experience in AI/ML with a strong focus on computer vision and system-level design, and a proven portfolio of production-level projects in image/video processing or real-time systems.

Preferred Qualities:
Strong problem-solving and debugging skills
Excellent communication and teamwork capabilities
A passion for building smart, scalable vision systems
A proactive and independent approach to research and implementation
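A building block behind both the detection and tracking stacks this role names (YOLO evaluation, Deep SORT's association step) is bounding-box intersection-over-union. A minimal, self-contained sketch, assuming boxes given as (x1, y1, x2, y2) corners:

```python
# Intersection-over-Union (IoU) for axis-aligned boxes: the overlap measure
# used to score detections against ground truth and to match detections to
# tracks. Boxes are (x1, y1, x2, y2) with x2 > x1, y2 > y1.

def iou(a, b) -> float:
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Overlap rectangle (may be empty)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

Detection benchmarks typically count a prediction as correct when IoU with the ground-truth box exceeds a threshold such as 0.5.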
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.

Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
Support Data & Analytics technology transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

Responsibilities
Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
Implement basic monitoring and explainability for ML models using the Azure Responsible AI Dashboard and InterpretML.
Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications
5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
5+ years of experience working within cross-functional IT or data operations teams.
2+ years of experience in a leadership or team coordination role within an operational or support environment.
Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
Excellent communication: ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
Customer-centric approach: strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
Problem ownership & accountability: proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
Growth mindset: willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
Operational excellence: experience in managing and improving large-scale operational services with a focus on scalability and reliability.
Site reliability & automation: understanding of SRE principles, automated remediation, and operational efficiencies.
Cross-functional collaboration: ability to build strong relationships with internal and external stakeholders through trust and collaboration.
Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
Knowledge of master data management concepts, data governance, and analytics.
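One of the responsibilities in this listing is automating drift detection. A toy version of the idea, not what Azure ML Pipelines or MLflow actually run, flags drift when a live window's mean wanders too many baseline standard deviations away from the baseline mean:

```python
import statistics

# Toy feature-drift check. Real MLOps stacks use richer statistical tests
# (e.g., population stability index, KS tests); this only shows the concept.

def drifted(baseline, window, threshold=3.0) -> bool:
    """True if the window mean is more than `threshold` baseline
    standard deviations away from the baseline mean."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return statistics.fmean(window) != mu
    return abs(statistics.fmean(window) - mu) / sigma > threshold

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
print(drifted(baseline, [10.1, 9.9, 10.0]))   # False: stable window
print(drifted(baseline, [14.0, 14.5, 13.8]))  # True: shifted window
```

In a pipeline, a True result would trigger the alerting and auto-remediation hooks described above (retraining, rollback, or a page to the on-call engineer).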
Posted 1 week ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Candidates who are ready to come for a face-to-face interview are open to apply.
5 days working from office.

RESPONSIBILITIES
* Architect and implement REST APIs.
* Design and implement low-latency, high-availability, and performant applications.
* Lead integration of GenAI-powered capabilities into backend systems, including prompt-based APIs and tool-using agents.
* Test software to ensure responsiveness, correctness, and efficiency.
* Collaborate with front-end developers on the integration of elements.
* Take initiative to build better and faster solutions to problems of scale.
* Troubleshoot application-related issues and work with the infrastructure team to triage major incidents.
* Analyze and research solutions, and develop and implement recommendations accordingly.

REQUIREMENTS
* Bachelor's Degree in Computer Science or a related field.
* 4+ years of professional experience.
* Expertise in JavaScript (ES6) and Node.js is a must.
* Strong knowledge of algorithms and data structures.
* In-depth knowledge of frameworks like Express.js and Restify.
* Experience in building backend services for handling data at large scale is a must.
* Ability to architect high-availability applications and servers on cloud adhering to best practices; microservices architecture is preferable.
* Experience working with MySQL and NoSQL databases like DynamoDB and MongoDB.
* Experience with LangChain, LangGraph, CrewAI, or equivalent GenAI agentic frameworks is a strong plus.
* Experience writing complex SQL queries.
* In-depth knowledge of Node.js concepts like the event loop.
* Ability to perform technical deep-dives into code.
* Experience building and deploying GenAI-powered backend services or tools (e.g., prompt routers, embedding search, RAG pipelines).
* Understanding of communication using WebSockets.
* Good understanding of clean architecture and SOLID principles.
* Basic knowledge of AWS and Docker containerization is preferable.
* Experience in Python is a plus.
* Experience with AI-assisted development tools (e.g., Cursor, GitHub Copilot) to accelerate development, assist in code reviews, and support efficient coding practices is a plus.
* Familiarity with Git.
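This listing mentions embedding search and RAG pipelines. The core ranking step is language-agnostic; here is a sketch in Python for brevity, with made-up three-dimensional vectors standing in for what a real system would get from an embedding model and a vector store:

```python
import math

# Toy embedding search of the kind a RAG pipeline performs: rank documents
# by cosine similarity to a query vector, then take the top k as context.

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """docs: {doc_id: vector}. Returns the k ids most similar to query."""
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return ranked[:k]

docs = {
    "refunds":  [0.9, 0.1, 0.0],
    "shipping": [0.1, 0.9, 0.1],
    "returns":  [0.8, 0.2, 0.1],
}
print(top_k([1.0, 0.1, 0.0], docs, k=2))  # ['refunds', 'returns']
```

In a production pipeline the retrieved documents would then be injected into the prompt sent to the LLM, which is the "R" in RAG.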
Posted 1 week ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Required Skills & Experience:
7-12 years of hands-on recruitment experience, with at least 3+ years in BFSI/fintech/stock broking.
Proven ability to close niche and critical roles (e.g., quants, algo devs, traders, compliance, sales leaders).
Lead the entire recruitment lifecycle from requisition to onboarding for tech, trading, sales, research, and support roles.
Strategise and execute hiring plans aligned with business objectives and headcount forecasts.
Drive niche hiring for quant roles, algo trading, low-latency development, and broking operations.
Manage and mentor a team of recruiters and recruitment partners.
Implement and optimise ATS, dashboards, and recruitment automation tools.
Build and nurture a pipeline of high-potential candidates through proactive sourcing, referrals, and headhunting.
Manage campus hiring from premier institutions (IITs, ISI, NITs, BITS, IIMs, etc.).
Own recruitment KPIs such as TAT, cost-per-hire, quality-of-hire, and offer-to-join ratio.
Ensure compliance with hiring processes, internal audits, and SEBI/regulatory guidelines.

If you're interested or know someone suitable, please email your resume to: 📧 nayan.ray@impetusconsultants.com
Posted 1 week ago
12.0 years
0 Lacs
Thane, Maharashtra, India
On-site
We are looking for a Director of Engineering (AI Systems & Secure Platforms) to join our Core Engineering team at Thane (Maharashtra, India). The ideal candidate should have 12-15+ years of experience in architecting and deploying AI systems at scale, with deep expertise in agentic AI workflows, LLMs, RAG, Computer Vision, and secure mobile/wearable platforms. Join us to craft the next generation of smart eyewear by leading intelligent, autonomous, real-time workflows that operate seamlessly at the edge. Read more here: The smartphone era is peaking. The next computing revolution is here.

Top 3 Daily Tasks:
* Architect, optimize, and deploy LLMs, RAG pipelines, and Computer Vision models for smart glasses and other edge devices
* Design and orchestrate agentic AI workflows, enabling autonomous agents with planning, tool usage, error handling, and closed feedback loops
* Collaborate across AI, Firmware, Security, Mobile, Product, and Design teams to embed "invisible intelligence" within secure wearable systems

Minimum Work Experience Required:
12-15+ years of experience in Applied AI, Deep Learning, Edge AI deployment, Secure Mobile Systems, and Agentic AI Architecture.

Top 5 Skills You Should Possess:
* Expertise in TensorFlow, PyTorch, Hugging Face, ONNX, and optimization tools like TensorRT and TFLite
* Strong hands-on experience with LLMs, Retrieval-Augmented Generation (RAG), and vector databases (FAISS, Milvus)
* Deep understanding of Android/iOS integration, AOSP customization, and secure communication (WebRTC, SIP, RTP)
* Experience in privacy-preserving AI (federated learning, differential privacy) and secure AI APIs
* Proven track record in architecting and deploying agentic AI systems: multi-agent workflows, adaptive planning, tool chaining, and MCP (Model Context Protocol)

Cross-Functional Collaboration Excellence:
* Partner with Platform & Security teams to define secure MCP server blueprints exposing device tools, sensors, and services with strong governance and traceability
* Coordinate with Mobile and AI teams to integrate agentic workflows across Android, iOS, and AOSP environments
* Work with Firmware and Product teams to define real-time sensor-agent interactions, secure data flows, and adaptive behavior in smart wearables

What You'll Be Creating:
* Agentic, MCP-enabled pipelines for smart glasses, featuring intelligent agents for vision, context, planning, and secure execution
* Privacy-first AI systems combining edge compute, federated learning, and cloud integration
* A scalable, secure wearable AI platform that reflects our commitment to building purposeful and conscious technology

Preferred Skills:
* Familiarity with secure real-time protocols: WebRTC, SIP, RTP
* Programming proficiency in C, C++, Java, Python, Swift, Kotlin, Objective-C, Node.js, shell scripting, and CUDA (preferred)
* Experience designing AI platforms for wearables/XR with real-time and low-latency constraints
* Deep knowledge of MCP deployment patterns: secure token handling, audit trails, permission governance
* Proven leadership in managing cross-functional tech teams across AI, Firmware, Product, Mobile, and Security
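The agentic workflow responsibilities in this posting (planning, tool usage, closed feedback loops) reduce to a simple sense-decide-act loop. The sketch below is an illustrative toy, not the company's stack: the tool table and the rule-based "planner" are invented stand-ins for an LLM planner and MCP-governed device tools.

```python
# Toy agent loop: plan -> call tool -> observe -> act on the observation.
# Tool names and return values are invented for this sketch; real agentic
# frameworks add LLM-driven planning, permissions, and audit trails on top.

TOOLS = {
    "battery_level": lambda: 82,          # stand-in for a device sensor read
    "brightness_down": lambda: "dimmed",  # stand-in for a device actuator
}

def agent(goal, max_steps=5):
    log = []
    for _ in range(max_steps):
        # Trivial rule-based "planner"; an LLM would choose the tool here.
        if goal == "save power":
            level = TOOLS["battery_level"]()          # sense
            log.append(("battery_level", level))
            if level < 90:                            # decide on the observation
                log.append(("brightness_down", TOOLS["brightness_down"]()))  # act
            return log  # feedback loop closed: goal handled
    return log

print(agent("save power"))  # → [('battery_level', 82), ('brightness_down', 'dimmed')]
```

The interesting design choice is that every tool call is logged before the next decision is made, which is what makes governance and traceability possible in an MCP-style deployment.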
Posted 1 week ago
3.0 years
8 - 12 Lacs
Gandhinagar, Gujarat, India
On-site
Job Title: Python/Django Developer
Location: Gandhinagar GIFT City
Job Type: Full-Time (Hybrid)
Experience: 3+ Years

Job Summary
We are seeking a skilled Python/Django Developer to join our team. The ideal candidate will be responsible for managing the interchange of data between the server and the users. Your primary focus will be the development of all server-side logic, ensuring high performance and responsiveness to front-end requests. You will also work closely with front-end developers to integrate user-facing elements into the application. A basic understanding of front-end technologies is required.

Key Responsibilities
* Develop and maintain efficient, reusable, and reliable Python code.
* Design and implement low-latency, high-availability, and performant applications.
* Integrate user-facing elements developed by front-end developers with server-side logic.
* Ensure security and data protection standards are implemented.
* Integrate data storage solutions such as MySQL and MongoDB.
* Optimize applications for maximum speed and scalability.
* Collaborate with other team members and stakeholders to develop scalable solutions.
* Write unit and integration tests to ensure software quality.
* Debug and resolve application issues promptly.
* Maintain code integrity and organization using version control tools like Git.

Key Requirements
* Proficiency in Python with hands-on experience in at least one web framework such as Django or Flask.
* Strong knowledge of Object Relational Mapper (ORM) libraries.
* Experience integrating multiple data sources and databases into one system.
* Understanding of Python’s threading limitations and multi-process architecture.
* Good understanding of server-side templating languages such as Jinja2 or Mako.
* Basic knowledge of front-end technologies like JavaScript, HTML5, and CSS3.
* Strong grasp of security and data protection best practices.
* Experience with user authentication and authorization across multiple systems, servers, and environments.
* Solid understanding of fundamental design principles for scalable applications.
* Experience with event-driven programming in Python.
* Ability to design and implement MySQL database schemas that support business processes.
* Strong unit testing and debugging skills.
* Proficiency in Git for code versioning and collaboration.

Preferred Qualifications
* Experience with cloud platforms like AWS, Azure, or Google Cloud.
* Familiarity with containerization tools like Docker.
* Knowledge of RESTful APIs and microservices architecture.
* Experience working in Agile development environments.

Skills: azure, aws lambda, backend apis, mako, backend development, css3, docker, django, google cloud, aws, javascript, git, restful architecture, jinja2, python, mongodb, devops, microservices, html5, flask, restful apis, mysql
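One requirement above, understanding Python's threading limitations, can be demonstrated in a few lines: CPython's GIL serializes CPU-bound bytecode, so a thread pool returns correct results but without CPU parallelism; multiprocessing (or C extensions) is the usual workaround. The workload below is an arbitrary example chosen for this sketch.

```python
# CPU-bound work through a thread pool: correct results, but the GIL means
# the threads take turns on the CPU rather than running in parallel.
from concurrent.futures import ThreadPoolExecutor

def cpu_task(n):
    # Pure-Python CPU-bound work: sum of squares below n.
    return sum(i * i for i in range(n))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_task, [10, 100, 1000]))

print(results)  # → [285, 328350, 332833500] — same answers as a sequential loop
```

Swapping `ThreadPoolExecutor` for `concurrent.futures.ProcessPoolExecutor` keeps the same API but runs each task in its own process, sidestepping the GIL for CPU-bound work.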
Posted 1 week ago
3.0 years
8 - 12 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Python/Django Developer
Location: Gandhinagar GIFT City
Job Type: Full-Time (Hybrid)
Experience: 3+ Years

Job Summary
We are seeking a skilled Python/Django Developer to join our team. The ideal candidate will be responsible for managing the interchange of data between the server and the users. Your primary focus will be the development of all server-side logic, ensuring high performance and responsiveness to front-end requests. You will also work closely with front-end developers to integrate user-facing elements into the application. A basic understanding of front-end technologies is required.

Key Responsibilities
* Develop and maintain efficient, reusable, and reliable Python code.
* Design and implement low-latency, high-availability, and performant applications.
* Integrate user-facing elements developed by front-end developers with server-side logic.
* Ensure security and data protection standards are implemented.
* Integrate data storage solutions such as MySQL and MongoDB.
* Optimize applications for maximum speed and scalability.
* Collaborate with other team members and stakeholders to develop scalable solutions.
* Write unit and integration tests to ensure software quality.
* Debug and resolve application issues promptly.
* Maintain code integrity and organization using version control tools like Git.

Key Requirements
* Proficiency in Python with hands-on experience in at least one web framework such as Django or Flask.
* Strong knowledge of Object Relational Mapper (ORM) libraries.
* Experience integrating multiple data sources and databases into one system.
* Understanding of Python’s threading limitations and multi-process architecture.
* Good understanding of server-side templating languages such as Jinja2 or Mako.
* Basic knowledge of front-end technologies like JavaScript, HTML5, and CSS3.
* Strong grasp of security and data protection best practices.
* Experience with user authentication and authorization across multiple systems, servers, and environments.
* Solid understanding of fundamental design principles for scalable applications.
* Experience with event-driven programming in Python.
* Ability to design and implement MySQL database schemas that support business processes.
* Strong unit testing and debugging skills.
* Proficiency in Git for code versioning and collaboration.

Preferred Qualifications
* Experience with cloud platforms like AWS, Azure, or Google Cloud.
* Familiarity with containerization tools like Docker.
* Knowledge of RESTful APIs and microservices architecture.
* Experience working in Agile development environments.

Skills: azure, aws lambda, backend apis, mako, backend development, css3, docker, django, google cloud, aws, javascript, git, restful architecture, jinja2, python, mongodb, devops, microservices, html5, flask, restful apis, mysql
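The event-driven programming requirement above is most commonly met with asyncio. Here is a minimal sketch (the handler names and delays are invented for illustration; real handlers would await database or HTTP calls instead of sleeping):

```python
# Two coroutines "handle requests" concurrently on a single thread, yielding
# to the event loop at each await instead of blocking it.
import asyncio

async def handle(request_id, delay):
    await asyncio.sleep(delay)  # stand-in for non-blocking I/O (DB, HTTP)
    return f"request-{request_id} done"

async def main():
    # gather schedules both handlers; total wall time ~= max(delay), not the sum.
    return await asyncio.gather(handle(1, 0.02), handle(2, 0.01))

print(asyncio.run(main()))  # → ['request-1 done', 'request-2 done']
```

Note that `gather` returns results in the order the coroutines were passed, even though request 2 finishes first.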
Posted 1 week ago
3.0 years
0 Lacs
Greater Chennai Area
On-site
Job ID: 39582

Position Summary
A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer, who is responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization, with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer, you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption. We are a leading company in trusted identity: HID Global products, solutions, and services help millions of customers around the globe create, manage, and use secure identities.

Roles & Responsibilities:
* Design, develop, and deploy robust, scalable AI/ML models in production environments.
* Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics.
* Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval.
* Integrate multimodal conversational AI platforms (MCP), including voice, vision, and text, to deliver rich user interactions.
* Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures.
* Optimize models for performance, latency, and scalability.
* Build data pipelines and workflows to support model training and evaluation.
* Conduct research and experimentation on state-of-the-art techniques (DL, NLP, time series, CV).
* Partner with MLOps and DevOps teams to implement best practices in model monitoring, versioning, and re-training.
* Lead code reviews and architecture discussions, and mentor junior and peer engineers.
* Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency.
* Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX.
* Ensure data integrity, quality, and preprocessing best practices for AI/ML model development.
* Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance.
* Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems.
* Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization.
* Strong analytical and problem-solving mindset.

Technical Requirements:
* Strong expertise in AI/ML engineering and software development.
* Strong experience with RAG architecture and vector databases.
* Proficiency in Python and hands-on experience with ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.).
* Familiarity with conversational AI platforms like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration.
* Cloud-based AI/ML experience (AWS SageMaker, Azure ML, GCP Vertex AI, etc.).
* Solid understanding of the AI/ML life cycle: data preprocessing, feature engineering, model selection, training, validation, and deployment.
* Experience with production-grade ML systems (model serving, APIs, pipelines).
* Familiarity with data engineering tools (Spark, Kafka, Airflow, etc.).
* Strong knowledge of statistical modeling, NLP, CV, recommendation systems, anomaly detection, and time-series forecasting.
* Hands-on software engineering with knowledge of version control, testing, and CI/CD.
* Hands-on experience deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow.
* Experience in MLOps and CI/CD for ML pipelines, including monitoring, re-training, and model drift detection.
* Proficiency in scaling AI solutions in cloud environments (AWS, Azure, GCP).
* Experience in data preprocessing, feature engineering, and dimensionality reduction.
* Exposure to data privacy, compliance, and secure ML practices.

Education and/or Experience:
* Graduation or a master’s degree in computer science, information technology, or AI/ML/data science.
* 3+ years of hands-on experience in AI/ML development, deployment, and optimization.
* Experience leading AI/ML teams and mentoring junior engineers.
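Model drift detection, listed among the MLOps requirements above, can be illustrated with a deliberately simple mean-shift score (the data, thresholds, and alerting rule below are invented for this sketch; production systems typically use tests like PSI or Kolmogorov-Smirnov over real traffic):

```python
# Compare a feature's live distribution against its training baseline:
# how many baseline standard deviations has the live mean moved?
import statistics

def drift_score(baseline, live):
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else 0.0

baseline = [10, 11, 9, 10, 12, 10, 11, 9]   # feature values seen at training time
stable   = [10, 11, 10, 9]                  # live traffic, no shift
shifted  = [15, 16, 14, 17]                 # live traffic after a distribution shift

print(drift_score(baseline, stable))   # small: no alert
print(drift_score(baseline, shifted))  # several sigmas: trigger re-training
```

A monitoring job would run this per feature on a schedule and page (or kick off a re-training pipeline) when the score crosses an agreed threshold.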
Posted 1 week ago
20.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Index Exchange, we’re reinventing how digital advertising works—at scale. As a global advertising supply-side platform, we empower the world’s leading media owners and marketers to thrive in a programmatic, privacy-first ecosystem. We’re a proud industry pioneer with over 20 years of experience accelerating the ad technology evolution. Our proprietary tech is trusted by some of the world’s largest brands and media owners and plays a crucial role in keeping the internet open, accessible, and largely free. We process more than 550 billion real-time auctions every day (in comparison, Google processes 8.5 billion searches per day) with ultra-low latency. Our platform is vertically integrated from servers to networks and runs primarily on our own metal and cloud infrastructure. This end-to-end infrastructure is designed to provide both stability and agility, enabling us to adapt quickly as the market evolves. At the core of it all is our engineering-first culture. Our engineers tackle internet-scale problems across tight-knit, global teams. From moving petabytes of data and optimizing with AI to making real-time infrastructure decisions, Indexers have the agency and influence to shape the future of advertising. We move fast, build thoughtfully, and stay grounded in our core values. About The Role We are seeking a Senior Network Engineer with a proven track record to further the development of our next-generation network architectures. This role will entail deploying and operating advanced networking solutions with a strong emphasis on high availability, low-latency, and security measures. This position reports directly to the Engineering Lead Manager, Networking based in Canada and will work closely with members of the Technical Operations team. 
Here’s What You’ll Be Doing
* Network troubleshooting to isolate and diagnose network problems
* Analyzing business requirements to develop technical network solutions and their framework
* Developing implementation plans, test plans, and project timelines for various projects
* Working with technology vendors
* Staying abreast of how technology infrastructures are currently impacting and driving competitors
* Writing functional requirements/specifications documents
* Enhancing operational efficiency and quality by implementing network automation practices to streamline processes
* Solving complex problems with many variables
* Participating in a 24x7 on-call rotation to provide timely response and resolution to network incidents and emergencies

Here's What You Need
* 8-10+ years’ experience in network design, operations, and support
* Exceptional written and verbal communication skills
* Demonstrated expertise in many of the following protocols and technologies: TCP/IP, BGP, IPv6, QoS, NetFlow, EVPN, VXLAN, DMVPN, GRE
* Strong expertise in routing, switching, enterprise, and data center networking with Cisco (NX-OS, IOS-XE, and IOS-XR) and Arista (EOS) platforms; experience with Arista’s AVD and CVP is a plus
* Experience with L4-L7 load balancing solutions, such as NetScaler, HAProxy, and Nginx
* Expertise with Cisco security solutions (ASA, Firepower, AnyConnect), as well as other security vendors such as Palo Alto or Fortinet, demonstrating an understanding of network security principles and technologies
* Familiarity with network automation using tools such as Ansible, Python, and Nornir to support network provisioning, configuration management, and troubleshooting
* Understanding of Kubernetes networking concepts and Container Network Interface (CNI) standards
* Networking certifications such as Cisco Certified (CCIE, CCNP) in at least one of the following are a plus: Data Center, Enterprise, Security, DevNet
* Knowledge of Linux and scripting

Why You’ll Love Working Here
* Comprehensive health, dental, and vision plans for you and your dependents
* Paid time off, health days, and personal obligation days plus flexible work schedules
* Competitive retirement matching plans
* Equity packages
* Company contribution to Provident Fund
* Monthly internet stipend
* Generous parental leave available to birthing, non-birthing, and adoptive parents
* Annual well-being allowance plus fitness discounts and group wellness activities
* Employee assistance program
* Mental health first aid program that provides an in-the-moment point of contact and reassurance
* One day of volunteer time off per year and a donation-matching program
* Bi-weekly town halls and regular community-led team events
* Multiple resources and programming to support continuous learning
* A workplace that supports a diverse, equitable, and inclusive environment – learn more here

Equal employment opportunity
At Index Exchange, we believe that successful products are built by teams just as diverse as the audience who uses them. As such, we are committed to equal employment opportunities. We celebrate diversity of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or expression, or veteran status. Additionally, we realize that diversity is deeper than any status or classification: diversity is the human experience. For those who show grit, passion, and humility, Index will welcome you.

Accessibility For Applicants With Disabilities
Index Exchange welcomes and encourages individuals with disabilities to apply to work with us. If you require an accommodation, please share the details of your request and any information about how we can assist you with the hiring recruiter when they contact you. Index Exchange will make reasonable efforts to ensure accommodation requests are met throughout the recruitment process.
Index Everywhere, Index Anywhere Our corporate headquarters are in Toronto, with major offices in New York, Montreal, Kitchener, London, San Francisco, and many other global cities. As a major global advertising exchange, we are committed to operating as a tightly knit global team and embracing and empowering talent wherever our colleagues may be.
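The role above calls for network automation with Ansible, Python, and Nornir. As a minimal illustration of the underlying idea (the device data and the config dialect are made up for this sketch, and real provisioning would go through an automation framework rather than raw templating), rendering interface configuration from structured data looks like:

```python
# Render per-interface switch config from structured data with stdlib templating.
from string import Template

TMPL = Template(
    "interface $name\n"
    " description $desc\n"
    " switchport access vlan $vlan"
)

# Structured source of truth: in practice this would come from an inventory
# system or YAML vars, not a hard-coded list.
interfaces = [
    {"name": "Eth1/1", "desc": "web-01", "vlan": 10},
    {"name": "Eth1/2", "desc": "db-01", "vlan": 20},
]

config = "\n!\n".join(TMPL.substitute(i) for i in interfaces)
print(config)
```

Ansible and Nornir generalize exactly this pattern: the template grows into Jinja2, and pushing the rendered config to devices is handled by connection plugins.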
Posted 1 week ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: Node.js Backend Developer
Location: Sector 132, Noida
Employment Type: Full-Time

Alphadroid is a leading global Robotics and AI venture with a strong presence in India, the UK, the US, and the Middle East. The company aims to be the global robotics leader through innovative solutions for front- and middle-office businesses.

Job Description
We are seeking an experienced Node.js Developer with 2+ years of hands-on experience, including strong expertise in TypeScript, to join our dynamic team. The ideal candidate will be responsible for designing and implementing scalable, high-performance, and secure backend services in a microservices and multi-tenant architecture. Your role will focus on backend-driven architecture, ensuring low-latency and efficient systems. You will work on integrating third-party services, implementing caching mechanisms, and developing role-based access systems.

Responsibilities:
● Develop and maintain server-side applications in a microservices-based architecture using Node.js and TypeScript.
● Design and implement low-latency, high-availability, and secure systems.
● Work with NestJS and TypeScript to build scalable and performant applications.
● Implement caching strategies using Redis or other caching technologies to enhance application performance.
● Integrate third-party services (APIs, payment gateways, etc.) into backend systems.
● Ensure proper role-based access control (RBAC) and secure data management.
● Collaborate with front-end developers to integrate user-facing elements with backend logic.
● Design and document RESTful APIs using Swagger and ensure consistent API standards.
● Create and maintain database schemas to support multi-tenant architecture and complex business processes.
● Optimize applications for maximum performance and scalability.
● Work with various data storage solutions (SQL, NoSQL) for seamless data integration.
● Implement and maintain automated testing frameworks and unit tests.
● Stay up to date with emerging technologies and best practices for building secure systems.

Skills & Qualifications:
● 2+ years of professional experience in Node.js development.
● Strong proficiency in TypeScript and Node.js frameworks, particularly NestJS.
● Experience with microservices architecture and backend-driven application development.
● Proficient in implementing caching mechanisms like Redis for performance improvement.
● Experience with role-based access control (RBAC) and security best practices.
● Hands-on experience integrating third-party services and APIs (e.g., payment gateways).
● Strong understanding of asynchronous programming and its workarounds.
● Familiarity with Swagger for API documentation and design.
● Knowledge of building secure, scalable, and low-latency systems.
● Experience working with databases such as MySQL, PostgreSQL, or MongoDB.
● Familiarity with version control tools like Git.
● Basic understanding of front-end technologies (HTML5, CSS3) for smooth integration.
● Excellent problem-solving skills and ability to work in a collaborative environment.

Education:
● Bachelor’s degree in Computer Science, Engineering, or a related field is preferred.

If you are passionate about building cutting-edge, scalable backend systems with TypeScript in a fast-paced environment, we’d love to hear from you!
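Role-based access control (RBAC), which this posting asks for in both the responsibilities and qualifications, has a small core: a role-to-permission table consulted before every protected operation. The sketch below is Python for illustration only (the role names and permission table are invented); in the NestJS stack the posting describes, guards and decorators play the same part.

```python
# Minimal RBAC: a decorator checks the caller's role against a permission
# table before the protected function runs.
from functools import wraps

# Invented permission table for this sketch.
PERMISSIONS = {"admin": {"read", "write", "delete"}, "viewer": {"read"}}

def require(permission):
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if permission not in PERMISSIONS.get(user["role"], set()):
                raise PermissionError(f"{user['name']} lacks '{permission}'")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require("delete")
def delete_record(user, record_id):
    return f"deleted {record_id}"

print(delete_record({"name": "asha", "role": "admin"}, 42))  # → deleted 42
```

In a multi-tenant system, the permission lookup would additionally be scoped by tenant so that an admin of one tenant cannot act on another tenant's records.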
Posted 1 week ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
At SmartBear, we believe building great software starts with quality, and we’re helping our customers make that happen every day. Our solution hubs (SmartBear API Hub, SmartBear Insight Hub, and SmartBear Test Hub, featuring HaloAI) bring visibility and automation to software development, making it easier for teams to deliver high-quality software faster. SmartBear is trusted by over 16 million developers, testers, and software engineers at 32,000+ organizations, including innovators like Adobe, JetBlue, FedEx, and Microsoft.

Software Engineer – Java, Zephyr Enterprise
* Solve challenging business problems and build highly scalable applications
* Design, document, and implement new systems in Java 8/17
* Build microservices, specifically with HTTP, REST, JSON, and XML

Product intro:
Zephyr Enterprise is undergoing a transformation to better align our products with end users’ requirements while maintaining our market-leading position and strong brand reputation across the test management vertical. Go to our product page if you want to know more about Zephyr Test Management Products | SmartBear. You can even take a free trial to check it out 😊

About the role:
As a Software Engineer, you will be an integral part of this transformation, solving challenging business problems and building highly scalable and available applications that provide an excellent user experience. Reporting to the Lead Engineer, you will develop solutions using available tools and technologies, assist the engineering team in problem resolution through hands-on participation, and effectively communicate status, issues, and risks in a precise and timely manner. You will write code per product requirements, create new products, create automated tests, contribute to system testing, and follow an agile mode of development. You will interact with both business and technical stakeholders to deliver high-quality products and services that meet business requirements and expectations while applying the latest available tools and technology, and you will develop scalable, real-time, low-latency data egress/ingress solutions in an agile delivery model.

We are looking for someone who:
* Can design, document, and implement new systems, as well as enhancements and modifications to existing software, with code that complies with design specifications and meets security and Java best practices.
* Has 4-7 years of experience, with hands-on work on the Java 17 platform or higher, and holds a Bachelor’s degree in Computer Science, Computer Engineering, or a related technical field.
* Has API-driven development experience: working with remote data via SOAP, REST, and JSON, and delivering high-value projects using Agile (Scrum) methodology, preferably with the JIRA tool.
* Has a good understanding of OOAD, the Spring Framework, and microservices-based architecture.
* Has experience with application performance tuning, scaling, security, and resiliency best practices.
* Has experience with relational or NoSQL databases, low-level design, and core Java patterns.
* Has experience with the AWS stack (RDS, S3, ElastiCache), SSDLC, Agile methodologies, and development experience in a Scrum environment.
* Has experience with messaging queues, preferably RabbitMQ or ActiveMQ/Artemis.
* Has experience with the Atlassian suite of products and the related ecosystem of plugins.
* Experience with React and JavaScript is good to have.

Why you should join the SmartBear crew:
You can grow your career at every level. We invest in your success as well as the spaces where our teams come together to work, collaborate, and have fun. We love celebrating our SmartBears; we even encourage our crew to take their birthdays off. We are guided by a People and Culture organization - an important distinction for us.
We think about our team holistically – the whole person. We celebrate our differences in experiences, viewpoints, and identities because we know it leads to better outcomes. Did you know: Our main goal at SmartBear is to make our technology-driven world a better place. SmartBear is committed to ethical corporate practices and social responsibility, promoting good in all the communities we serve. SmartBear is headquartered in Somerville, MA with offices across the world including Galway Ireland, Bath, UK, Wroclaw, Poland and Bangalore, India. We’ve won major industry (product and company) awards including B2B Innovators Award, Content Marketing Association, IntellyX Digital Innovator and Built-in Best Places to Work. SmartBear is an equal employment opportunity employer and encourages success based on our individual merits and abilities without regard to race, color, religion, gender, national origin, ancestry, mental or physical disability, marital status, military or veteran status, citizenship status, age, sexual orientation, gender identity or expression, genetic information, medical condition, sex, sex stereotyping, pregnancy (which includes pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), or any other legally protected status.
Posted 1 week ago
3.0 years
0 Lacs
Greater Bengaluru Area
On-site
About Us: Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. Over 60% of our employees are in R&D, we are reshaping telecom networks, one innovation at a time. Why Tejas: We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. 
At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/ OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you! About Team: This team is responsible for Platform and software validation for the entire product portfolio. They will develop automation Framework for the entire product portfolio. Team will develop and deliver customer documentation and training solutions. Compliance with technical certifications such as TL9000 and TSEC is essential for ensuring industry standards and regulatory requirements are met. Team works closely with PLM, HW and SW architects, sales and customer account teams to innovate and develop network deployment strategy for a broad spectrum of networking products and software solutions. As part of this team, you will get an opportunity to validate, demonstrate and influence new technologies to shape future optical, routing, fiber broadband and wireless networks. Roles & Responsibilities: Design and implement system solutions , propose process alternatives , and enhance business viewpoints to adopt standard solutions. Specify and design end-to-end solutions with high- and low-level architecture design to meet customer needs. Apply solution architecture standards, processes, and principles to maintain solution integrity, ensuring compliance with client requirements . Develop full-scope solutions , working across organizations to achieve operational success. Research, design, plan, develop, and evaluate effective solutions in specialized domains to meet customer requirements and outcomes . 
Solve complex technical challenges and develop innovative solutions that impact business performance. Mandatory skills: Around 3 to 6 Years Strong expertise in Cloud-Native, Microservices, and Virtualization technologies such as Docker, Kubernetes, OpenShift, and VMware . Experience in Istio or Nginx Ingress, Load balancer, OVS, SRIOV and dpdk etc. Hands-on experience in creating Kubernetes clusters , virtual machines, virtual networks & bridges in bare metal servers . Expertise in server virtualization techniques such as VMware, Red Hat OpenStack, KVM . Solid understanding of cloud concepts , including Virtualization, Hypervisors, Networking, and Storage . Knowledge of software development methodologies, build tools, and product lifecycle management . Experience in creating and updating Helm charts for carrier-grade deployments. Deep understanding of IP networking in both physical and virtual environments . Implementation of high availability, scalability, and disaster recovery measures . Proficiency in Python/Shell scripting (preferred). Experience in automation scripting using Ansible and Python for tasks such as provisioning, monitoring, and configuration management . Desired skills: Ability to debug applications and infrastructure to ensure low latency and high availability . Collaboration with cross-functional teams to resolve escalated incidents and ensure seamless operations on deployed cloud platforms . Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field . Certifications in Kubernetes (CKA/CKS) Or OpenShift is a plus. Experience working in 5G Core networks or telecom industry solutions is advantageous. Diversity and Inclusion Statement : Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating all-inclusive environment for all employees. 
We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
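For context, the Helm-chart work mentioned in this posting typically templates Kubernetes manifests along these lines (the network-function name, image path, and resource figures below are illustrative placeholders, not from the posting):

```yaml
# Minimal Deployment a carrier-grade Helm chart might template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nf            # hypothetical network-function name
spec:
  replicas: 3                 # HA: real charts add anti-affinity to spread pods
  selector:
    matchLabels:
      app: example-nf
  template:
    metadata:
      labels:
        app: example-nf
    spec:
      containers:
        - name: example-nf
          image: registry.example.com/example-nf:1.0.0
          resources:
            requests: { cpu: "500m", memory: "512Mi" }
            limits:   { cpu: "1",    memory: "1Gi" }
```

In a chart, the replica count, image tag, and resource figures would come from `values.yaml` so the same template serves lab and carrier-grade deployments.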
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: DevOps Engineer (2–6 years experience, on-premise trading infrastructure)
Location: Gurgaon
Experience: 2–6 years of DevOps experience (fintech/trading domain preferred)
Industry: Fintech / Algorithmic Trading

✅ Requirements:
- Strong knowledge of Linux, Python scripting, and SQL
- Experience with Docker, Ansible, GitLab/Jenkins CI/CD
- Familiarity with Airflow, monitoring & log management tools

🚀 We're Hiring: DevOps Engineer
Are you passionate about building scalable, reliable infrastructure in a fast-paced fintech environment? We're looking for a DevOps Engineer with 2–6 years of experience to join our high-performance tech team.

🔹 About the Company
We're a high-growth fintech firm operating at the intersection of technology and global markets. Our core focus is algorithmic trading, where we build scalable systems that power intelligent, real-time trading strategies. If you enjoy solving infrastructure challenges in a low-latency, data-heavy environment, this is the place for you.

🔧 Responsibilities:
- Manage Linux-based trading infrastructure
- Automate deployments using Ansible, Docker & CI/CD (Jenkins/GitLab)
- Handle Airflow workflows, Python scripts, monitoring & alerting systems
- Administer databases, write SQL queries, and support analytics
- Implement observability tools: Prometheus, Grafana, ELK, Loki
- Collaborate with cross-functional teams to maintain uptime & performance
- Proactively resolve production issues and support live trading systems

Nice to Have:
- NISM certification
- Experience with NSE/global market trading environments
- Excellent troubleshooting skills and an ownership mindset
- Willingness to support global trading ops and occasional night shifts
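The monitoring-and-alerting duties above usually boil down to checking latency and error rates against budgets before paging anyone. A minimal sketch in Python, assuming illustrative thresholds and a hypothetical `HealthReport` shape (not part of any real tool):

```python
from dataclasses import dataclass

@dataclass
class HealthReport:
    service: str
    p99_ms: float
    error_rate: float
    alert: bool

def evaluate(service, latencies_ms, errors, total,
             p99_budget_ms=50.0, error_budget=0.01):
    """Flag a service when its p99 latency or error rate exceeds budget."""
    ordered = sorted(latencies_ms)
    idx = max(0, -(-99 * len(ordered) // 100) - 1)  # nearest-rank p99 index
    p99 = float(ordered[idx])
    rate = errors / total if total else 0.0
    return HealthReport(service, p99, rate,
                        p99 > p99_budget_ms or rate > error_budget)
```

In practice the same thresholds live in Prometheus alert rules; a script like this is useful in CI smoke tests or ad-hoc triage.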
Posted 1 week ago
2.0 years
8 - 13 Lacs
Chennai, Tamil Nadu, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 800000 - Rs 1300000 (i.e. INR 8-13 LPA)
Min Experience: 2 years
Location: Chennai, Tamil Nadu
Job Type: full-time

We are seeking a skilled and motivated Backend Software Engineer with expertise in Java, Spring Boot, AWS, distributed systems, and platform-as-a-service (PaaS) environments. This role is ideal for someone with experience in product-based companies or startups who enjoys tackling complex backend challenges and building scalable, high-performance systems.

Key Responsibilities:
- Design, develop, and maintain scalable backend systems using Java, Spring Boot, and AWS.
- Optimize backend performance and reliability for distributed, high-traffic, low-latency applications.
- Collaborate with cross-functional teams to gather requirements and deliver robust technical solutions.
- Write clean, maintainable, and efficient code, and actively participate in code reviews.
- Continuously explore and apply new technologies to enhance backend system design and functionality.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum 2 years of experience in Java development with a strong foundation in Core Java and system design principles.
- Proficiency in Spring Boot and building microservices-based architectures.
- Hands-on experience with AWS services such as EC2, S3, RDS, and Lambda for application deployment and scalability.
- Experience designing and maintaining distributed systems in performance-critical environments.
- Strong grasp of data structures and algorithms, with the ability to apply them effectively to solve real-world engineering problems.
- Practical experience in PaaS environments, designing solutions that utilize cloud platform services.
- Excellent problem-solving skills and the ability to thrive in a fast-paced, agile development environment.
Key Skills: Java | Spring Boot | AWS | Distributed Systems | Data Structures | PaaS | Backend Development | Microservices
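Reliability in the distributed, high-traffic systems this role targets leans on patterns like retry with capped exponential backoff and full jitter. The role itself is Java/Spring Boot; the sketch below uses Python for brevity, and the function and parameter names are illustrative:

```python
import random
import time

def call_with_backoff(op, retries=5, base=0.05, cap=1.0,
                      sleep=time.sleep, rng=random.random):
    """Retry a flaky remote call with capped exponential backoff and
    full jitter, so synchronized clients don't hammer a recovering service."""
    for attempt in range(retries):
        try:
            return op()
        except ConnectionError:
            if attempt == retries - 1:
                raise                                  # budget exhausted
            delay = min(cap, base * 2 ** attempt) * rng()
            sleep(delay)
```

Injecting `sleep` and `rng` keeps the pattern unit-testable; in Java the same idea is typically expressed with a library such as Resilience4j rather than hand-rolled.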
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us: Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards, with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products, with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization's long-term success. With over 60% of our employees in R&D, we are reshaping telecom networks, one innovation at a time.

Why Join Tejas: We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas' fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide.
Who we are: In the rapidly evolving landscape of telecommunications, the development and optimization of Mobile Packet Core (MPC) solutions are critical for ensuring seamless, high-performance connectivity. Our team specializes in the design and implementation of cutting-edge Mobile Packet Core systems that support both 4G and 5G networks. Our solutions are cloud-native, highly available, and tailored to meet the demands of telco-scale operations, ensuring that our clients can deliver unparalleled service quality and innovation. Our team is dedicated to pushing the boundaries of mobile network technology by developing Mobile Packet Core solutions that are not only advanced and reliable but also adaptable to the future of telecommunications. With a focus on cloud-native design, high availability, and scalability, we empower operators to deliver exceptional connectivity and services to their customers, now and into the future. Our mission is to revolutionize mobile network infrastructure by providing robust, scalable, and flexible Mobile Packet Core solutions that drive the future of telecommunications. We are committed to delivering cloud-native architectures that offer unmatched reliability, performance, and agility to support both current and next-generation mobile networks.

What you'll work on: As a Staff Engineer, you will be responsible for driving testing and sustenance of various Network Functions and/or platforms for our Mobile Packet Core (EPC and 5GC).
You'll lead technical initiatives, mentor team members, and collaborate closely with cross-functional teams to drive innovation and ensure high-quality deliverables. You'll leverage your expertise to solve challenging problems and contribute to strategic engineering decisions.

- 4G EPC Testing: Develop, execute, and validate test plans for 4G EPC components including the Serving Gateway (SGW), Packet Gateway (PGW), Mobility Management Entity (MME), and Home Subscriber Server (HSS). Ensure interoperability and functionality of these components in various network scenarios.
- 5G Core Testing: Design and implement test plans for 5G Core network functions, including the Access and Mobility Management Function (AMF), Session Management Function (SMF), and User Plane Function (UPF). Validate 5G NR (New Radio) integration and end-to-end functionality.
- Protocol Validation: Test and validate protocols used in 4G and 5G networks, such as GTP, SCTP, and HTTP/2. Analyze protocol exchanges to ensure correct implementation and performance.
- Performance Testing: Conduct performance tests for both 4G and 5G core networks. Measure metrics such as throughput, latency, and scalability. Identify performance bottlenecks and work with engineering teams to resolve them.
- Automation and Scripting: Develop and maintain automated test scripts and frameworks for both 4G and 5G core testing. Utilize scripting languages (e.g., Python, Bash) and automation tools to improve testing efficiency.
- Issue Resolution: Identify, document, and track defects and performance issues. Collaborate with development and operations teams to troubleshoot and resolve issues effectively.
- Documentation and Reporting: Create detailed test documentation, including test plans, cases, and reports. Provide clear and concise communication on test results and issues to stakeholders.
- Collaboration: Work closely with cross-functional teams including software developers, network architects, and product managers to ensure comprehensive testing and integration of core network components.
- Continuous Improvement: Stay abreast of the latest advancements in mobile core technologies (both 4G and 5G). Suggest and implement improvements to testing methodologies and tools.

Mandatory skills:
- Deep knowledge of 4G EPC components and protocols (e.g., GTP, SCTP).
- Strong understanding of 5G Core functions and protocols (e.g., N1/N2/N3/N4 interfaces, HTTP/2).
- Experience with network simulators/emulators and traffic-generation tools.

Desired skills:
- Proficiency in scripting languages for automation (e.g., Python, Bash).
- Excellent analytical and problem-solving skills, with the ability to diagnose and resolve complex network issues.
- Strong verbal and written communication skills; ability to effectively articulate technical concepts to various stakeholders.
- High level of attention to detail and a methodical approach to testing and quality assurance.

Preferred Qualifications:
- Experience: 10 to 15 years, from a telecommunications or networking background
- Education: B.Tech/BE (CSE/ECE) or an equivalent degree

Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
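Protocol-validation scripts of the kind this role calls for often start by decoding raw headers captured off the wire. A minimal sketch of parsing the fixed 8-byte GTPv1 header in Python (field layout per 3GPP TS 29.060: flags, message type, length, TEID; the function and tuple names are illustrative):

```python
import struct
from collections import namedtuple

GtpHeader = namedtuple("GtpHeader", "version msg_type length teid")

def parse_gtpv1(data: bytes) -> GtpHeader:
    """Decode the fixed 8-byte GTPv1 header: the version sits in the top
    3 bits of the flags octet, followed by message type, payload length,
    and the Tunnel Endpoint Identifier (TEID)."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", data[:8])
    return GtpHeader(flags >> 5, msg_type, length, teid)
```

Real test frameworks layer optional extension headers and payload checks on top, but every GTP assertion starts from this fixed portion.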
Posted 1 week ago
0.0 - 1.0 years
0 - 0 Lacs
Hyderabad, Telangana
On-site
Job Opening for Child Psychologist
Job Location - Gachibowli, Hyderabad, Telangana
Contact - 9311809772 / kyadav@momsbelief.com
Freshers and experienced candidates can apply.

Job Highlights:
Role & Responsibilities
- Perform periodic parent counselling and grievance redressal.
- Recognize clients who can benefit from ABA and counsel the parents for enrolment.
- Plan and conduct ABA assessments, either on your own or with the support of a supervisor.
- Take daily cold-probe data for the client.
- Prepare assessment reports and an IEP for the client, either on your own or with the support of a supervisor.
- Manage negative behaviour of the learner and maintain instructional control. Ensure no person or property is harmed by the learner.
- Report any problem behaviours of special concern to supervisors, and plan and run a behaviour intervention plan.
- Collect data such as frequency, duration, and latency to track the child's progress.
- Manage materials required for therapy. Coordinate promptly with supervisors and the team if and when new material is needed.
- Coordinate with Special Educators, ST and OT from time to time so there is no overlap or clash in therapy goals.
- Ensure skills achieved by the learner are generalised and kept in maintenance. Contact supervisors to add new goals to the plan.
- Manage group sessions if they are planned at the centre.

Other Skills
- Must have a good understanding of psychology, especially reinforcement principles.
- Is well versed and confident in using Google Sheets and Docs, even on the phone, to ensure timely data correction.
- Should be quick in their responses to ensure instructional control over the learners and keep negative behaviours in check.
- Is not ashamed of singing and dancing as part of therapy.
- Is physically fit enough to engage in physical play with learners.
- Is creative enough to make the best use of available resources in therapy.
- Is open and inviting to the learner, yet strict enough to maintain instructional control.
- Is sufficiently loud, especially while praising the learner, and can instil enthusiasm.
- Can address regular queries from parents, and can differentiate which issues described by parents or the school need immediate attention.
- Knows how to ignore tantrums when required. Gives little to no reaction when being hit, spat on, or laughed at by the learner.
- Is up to date with their knowledge in the field of ABA and is willing to learn more.

Job Types: Full-time, Permanent, Fresher
Pay: ₹20,000.00 - ₹25,000.00 per month
Benefits: Health insurance, Provident Fund
Application Question(s): How soon can you join this job?
Education: Master's (Preferred)
Experience: Child Psychologist: 1 year (Preferred)
Language: Telugu (Preferred)
Location: Hyderabad, Telangana (Preferred)
Work Location: In person
Posted 1 week ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Who we are: Teesta Investment is a pioneering proprietary trading HFT startup trading and market-making global digital assets in India. We lead the market for various financial instruments and digital currencies and operate with exceptional expertise, knowledge and global reach. We are a firm founded on innovative, forward-thinking and professional approaches to digital asset trading, which is reflected in our daily operations. We leverage our extensive knowledge of all relevant market trends & developments, from both technology and trading perspectives, to help markets achieve price efficiency.

Your role: THIS IS A FULL-TIME, ON-SITE POSITION BASED OUT OF OUR KOLKATA OFFICE. This is a core development position within our software development and engineering team, where you will play a critical role in designing, developing, and optimizing the software infrastructure that powers our real-time trading strategies. You will need to be a highly skilled developer with a deep understanding of C++ and/or Rust, as well as proficiency across a number of other platforms and languages. Prior experience in an HFT environment with a proven track record is a must.

Your key responsibilities will include, but are not limited to:
- Collaborating closely with teams of traders, researchers and other developers to conceptualize, design, and implement high-performance trading algorithms for financial markets.
- Developing and maintaining key low-latency trading systems by optimizing code for performance, latency reduction, and efficiency.
- Implementing risk management and trade execution strategies to minimize risk exposure and maximize profitability.
- Monitoring and troubleshooting production systems, promptly identifying and resolving any issues to maintain uninterrupted trading uptime.
- Tracking and onboarding the latest cutting-edge developments in trading technologies and financial markets (cryptocurrency and other asset classes) to maintain a competitive advantage.
- Performing code reviews and running knowledge-sharing sessions to promote best practices and maintain code quality.
- Mentoring junior developers and interns by providing technical guidance to the team.

Our needs:
- A Bachelor's degree, preferably in Computer Science, Engineering, or a related field; an advanced or Master's degree preferred.
- 3+ years of experience as a software developer within a high-frequency trading (HFT) environment, with a strong focus on digital assets/cryptocurrency markets.
- Strong proficiency in C++ and/or Rust is essential.
- In-depth knowledge of market microstructure, trading algorithms, low-latency system design, network protocols, and hardware optimization. Exposure to order routing, market data feeds, and exchange connectivity protocols and platforms.
- Extensive experience with code debugging and performance-profiling tools.
- Strong familiarity with Linux-based development environments.
- Exceptional problem-solving skills and the ability to work effectively under pressure in a fast-paced trading environment.
- Excellent communication and collaboration skills.
- Prior experience at a cryptocurrency-focused HFT firm is a significant plus.

Perks offered:
- Access to in-house snack bar and drinks
- Reimbursement for meals
- Fitness club/gym memberships
- Sponsorship for higher education / academic endeavours
- Relocation benefits
- Health insurance for the candidate and their dependents

We're looking for candidates who have a passion for pushing the boundaries of finance and technology and are keen to promote the cause of alternative assets and digital financial systems.
In addition, you should be comfortable working in a fast growth environment, within a small agile team, with fast-evolving roles and responsibilities, variable workloads, tight deadlines, and a high degree of autonomy.
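To illustrate the kind of state a low-latency trading system maintains, here is a toy top-of-book tracker. Production systems in this space are C++/Rust and far more elaborate (lock-free structures, exchange-specific feed handlers); the class and method names below are purely illustrative:

```python
class TopOfBook:
    """Toy order-book state: track best bid/ask from price-level updates.
    A size of 0 clears the level, mirroring typical market-data feeds."""

    def __init__(self):
        self.bids = {}  # price -> size
        self.asks = {}

    def update(self, side, price, size):
        book = self.bids if side == "bid" else self.asks
        if size == 0:
            book.pop(price, None)   # level removed from the book
        else:
            book[price] = size      # level added or resized

    def best(self):
        """Return (best_bid, best_ask); None for an empty side."""
        best_bid = max(self.bids) if self.bids else None
        best_ask = min(self.asks) if self.asks else None
        return best_bid, best_ask
```

The O(n) `max`/`min` here is the first thing a real implementation replaces (sorted structures or price-indexed arrays), which is exactly the kind of latency optimization the role describes.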
Posted 1 week ago
0.0 - 9.0 years
0 - 1 Lacs
Pune, Maharashtra
On-site
- Good hands-on experience with SRE/observability concepts
- Good knowledge of or experience in monitoring tools like Prometheus, Grafana, ITRS, AppDynamics
- Good hands-on experience with DevOps tools like Jenkins, TeamCity, Ansible, uDeploy
- Good experience and hands-on scripting skills (Python, shell scripting) and automation
- Responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response and capacity planning
- Detect issues and handle failures to keep the system up and reliable
- Maintain reliability of infrastructure environments so they run smoothly without incident
- Gather and analyze metrics from operating systems and applications to assist in performance tuning and fault finding
- Partner with development teams to improve services through rigorous testing and release procedures
- Use automation tools to monitor and observe software reliability in the production environment
- Lead and drive platform-first initiatives to ensure the scalability, reliability and performance of our technology platform
- Play a pivotal role in enhancing the availability, reliability and performance of our critical systems and services
- Design and develop fully automated workflows using scripts (JavaScript, PowerShell, Bash) and utilities

Skills (Mandatory): Git, Grafana, Jenkins, AppDynamics, New Relic, Python, Shell Scripting, Splunk, Azure Monitor, Sumo Logic, Site24x7, CyberArk, Cloudflare, Prometheus, Chaos Monkey, Chaos Testing, Observability, Reliability Patterns, CloudWatch, Threat Modeling, Datadog, Dynatrace

Job Type: Full-time
Pay: ₹70,000.00 - ₹100,000.00 per month
Experience: Kafka: 9 years (Preferred)
Location: Pune, Maharashtra (Preferred)
Work Location: In person
Application Deadline: 24/07/2025
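The availability and capacity-planning responsibilities above are usually framed in terms of SLOs and error budgets. A minimal sketch of the arithmetic, assuming an illustrative 99.9% availability target (the function name is hypothetical, not from any SRE toolkit):

```python
def error_budget_remaining(slo, total_requests, failed_requests):
    """Fraction of the error budget still unspent for an availability SLO.

    A 99.9% SLO over 100,000 requests tolerates 100 failures; observing
    50 failures means half the budget remains.
    """
    allowed_failures = (1 - slo) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    consumed = failed_requests / allowed_failures
    return max(0.0, 1.0 - consumed)   # clamp: budget can't go negative
```

Teams typically gate risky changes on this number: plenty of budget left means ship, budget exhausted means freeze and focus on reliability work.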
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a Sr. Data Engineer who can start immediately.

Mandatory skills: Apache Flink, Java, Kafka
Looking for Chennai candidates who can join within 15 days.
Client: Product-based company (Chennai); work from office.

Qualifications:
• 8+ years of data engineering experience with large-scale systems (petabyte-level).
• Expert proficiency in Java for data-intensive applications.
• Hands-on experience with lakehouse architectures, stream processing (Flink), and event streaming (Kafka/Pulsar).
• Strong SQL skills and familiarity with RDBMS/NoSQL databases.
• Proven track record in optimizing query engines (e.g., Spark, Presto) and data pipelines.
• Knowledge of data governance, security frameworks, and multi-tenant systems.
• Experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform).

Key Responsibilities:
• Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures (Apache Iceberg, Delta Lake) to unify data lakes and warehouses.
• Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink to process structured/unstructured data with low latency.
• High-Performance Applications: Leverage Java to build scalable, high-throughput data applications and services.
• Modern Data Infrastructure: Leverage modern data warehouses and query engines (Trino, Spark) for sub-second operations and analytics on real-time data.
• Database Expertise: Work with RDBMS (PostgreSQL, MySQL, SQL Server) and NoSQL (Cassandra, MongoDB) systems to manage diverse data workloads.
• Data Governance: Ensure data integrity, security, and compliance across multi-tenant systems.
• Cost & Performance Optimization: Manage production infrastructure for reliability, scalability, and cost efficiency.
• Innovation: Stay ahead of trends in the data ecosystem (e.g., open table formats, stream processing) to drive technical excellence.
• API Development (Optional): Build and maintain Web APIs (REST/GraphQL) to expose data services internally and externally.
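The streaming-pipeline work above centers on windowed aggregations. In Flink this is expressed as `keyBy` plus a window operator in Java; the toy Python sketch below only illustrates the tumbling-window semantics (event shapes and names are made up for the example):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Group (timestamp_ms, key) events into fixed-size, non-overlapping
    tumbling windows and count occurrences per key -- the aggregation a
    Flink job would express with keyBy + TumblingEventTimeWindows."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # align to window grid
        counts[(window_start, key)] += 1
    return dict(counts)
```

A real Flink job adds what this sketch omits: watermarks for late events, incremental state in RocksDB, and exactly-once sinks.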
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary: We are looking for a skilled and proactive Machine Learning Applications Engineer with at least 6 years of industry experience and a strong foundation in DevOps practices. The ideal candidate will be responsible for building, deploying, and optimizing ML models in production environments, as well as ensuring robust infrastructure support for AI/ML pipelines. This role sits at the intersection of data science, machine learning engineering, and DevOps.

Key Responsibilities:
- Design, develop, and deploy scalable ML models and applications into production environments.
- Build and manage end-to-end ML pipelines including data ingestion, model training, evaluation, versioning, deployment, and monitoring.
- Implement CI/CD pipelines tailored for ML workflows.
- Collaborate with data scientists, software engineers, and cloud architects to operationalize machine learning solutions.
- Ensure high availability, reliability, and performance of ML services in production.
- Monitor and optimize model performance post-deployment.
- Automate infrastructure provisioning using Infrastructure-as-Code (IaC) tools.
- Maintain strong documentation of ML systems, experiments, and deployment configurations.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Minimum 6 years of professional experience in software engineering or ML engineering roles.
- Strong hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Proficiency in Python (and optionally Java, Scala, or Go).
- Solid experience with DevOps tools such as Docker, Kubernetes, Jenkins, GitLab CI/CD.
- Experience with cloud platforms like AWS, Azure, or GCP, particularly with AI/ML services and infrastructure.
- Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Strong understanding of MLOps practices including model versioning, experiment tracking, and reproducibility.
Preferred Qualifications:
- Experience with Kubeflow, MLflow, SageMaker, or Vertex AI.
- Familiarity with data engineering tools such as Apache Airflow, Spark, or Kafka.
- Understanding of data security and compliance best practices in ML deployments.
- Prior experience deploying large-scale, low-latency ML applications in production.
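Model versioning and reproducibility, mentioned among the MLOps practices above, ultimately come down to being able to trace a deployed model back to its exact training configuration. One simple way to sketch that idea (the function name and tag format are illustrative; tools like MLflow do this properly with run IDs and artifact stores):

```python
import hashlib
import json

def model_version(params: dict, data_fingerprint: str) -> str:
    """Derive a reproducible version tag from hyperparameters and a
    dataset fingerprint, so identical training configurations always
    map to the same tag regardless of dict key order."""
    payload = json.dumps({"params": params, "data": data_fingerprint},
                         sort_keys=True)  # canonical form before hashing
    return hashlib.sha256(payload.encode()).hexdigest()[:12]
```

Stamping this tag into the model registry and the serving container's labels gives monitoring and rollback tooling a single join key across the pipeline.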
Posted 1 week ago