6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB DETAILS: Job Title: Performance Engineer
Locations: Chennai, Bangalore, Pune, Hyderabad, Kolkata
Experience Range: 6+ Years
Position: Hybrid
Required Skills: LoadRunner, JMeter, AWS/GCP/Azure, Dynatrace/Grafana, AppDynamics, and Oracle/SQL
Job Description:
1. Total of 6+ years of experience, with a minimum of 3+ years of proven hands-on software testing experience.
2. Strong experience in performance testing of SAP applications. The candidate should have experience working on brownfield or greenfield S/4HANA implementations.
3. Strong knowledge of one or more programming languages (e.g., Python, Perl, ABAP, Java, C++).
4. Experience in performance testing tools: LoadRunner Enterprise, Apache JMeter, SoapUI, Postman.
5. Experience in data analysis and visualization, end-to-end performance analysis, and root cause analysis.
6. Experience in service-oriented architecture and RESTful APIs.
7. Strong knowledge of performance-tuning SQL queries.
8. Extensive hands-on experience with JVM tuning techniques.
9. Experience with application performance monitoring tools like Dynatrace and AppDynamics. Efficiently work with various profiling tools to identify performance bottlenecks.
10. Experience with cloud-based platforms like AWS and GCP.
11. Expertise in performance assessment and optimization of complex applications built on the SAP BTP platform. Strong understanding of BTP components like API Management (APIM), SAP Integration Suite, Conversational AI (CAI), Data Intelligence (SAP DI), and AWS storage services. Skilled in BTP technologies like Smart Data Integration (SDI).
12. Strong experience in performance testing and analysis of SCP iOS SDK mobile apps.
13. Proficient in application and DB performance monitoring in SAP BTP using Kibana, HANA DB cockpit, and SAP BTP cockpit.
14. Understanding of basic machine learning algorithms and principles.
15. Strong understanding of relational database systems as well as in-memory databases.
16. Understanding of algorithms and O-notation. Code management using Git.
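As an illustrative aside (not part of the listing): the end-to-end latency measurement and percentile analysis this role describes can be sketched in a few lines of Python. The endpoint URL, request count, and thread count below are hypothetical placeholders, and a real load test would use LoadRunner or JMeter rather than this minimal script.

```python
# Minimal load-and-analyze sketch, illustrative only (not a LoadRunner/JMeter replacement).
# The endpoint, request count, and thread count are hypothetical placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://example.com/api/health"  # hypothetical endpoint
REQUESTS = 200
THREADS = 10

def timed_get(_):
    start = time.perf_counter()
    resp = requests.get(TARGET_URL, timeout=10)
    return resp.status_code, (time.perf_counter() - start) * 1000

def percentile(sorted_ms, pct):
    # Nearest-rank percentile over a sorted list of latencies.
    idx = max(0, int(round(pct / 100 * len(sorted_ms))) - 1)
    return sorted_ms[idx]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        results = list(pool.map(timed_get, range(REQUESTS)))
    latencies = sorted(ms for _, ms in results)
    errors = sum(1 for code, _ in results if code >= 400)
    print(f"requests={len(latencies)} errors={errors}")
    print(f"p50={statistics.median(latencies):.1f} ms "
          f"p90={percentile(latencies, 90):.1f} ms "
          f"p99={percentile(latencies, 99):.1f} ms")
```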
Posted 3 days ago
1.0 years
1 - 2 Lacs
Delhi
On-site
Key Responsibilities
Set up and manage applications, databases, web servers (especially Nginx), and proxy servers in a Unix-based environment. Maintain and monitor a large number of servers with a goal of zero downtime. Identify and respond to system vulnerabilities and security threats in real time. Perform regular server upgrades, patch management, and health checks. Implement and manage monitoring, logging, and alerting systems for infrastructure and applications. Ensure system compliance and security policies are enforced; exposure to SOC 2 or similar compliance standards is a plus. Troubleshoot and resolve infrastructure issues promptly, even outside regular working hours when needed. Automate server and service provisioning where possible using shell scripts or tools like Ansible, Terraform, etc.
Requirements
Strong command over Unix/Linux systems and system-level troubleshooting. Solid experience in setting up and managing Nginx, MySQL/PostgreSQL, Redis, and other common services. Hands-on experience in server hardening, patching, and vulnerability management. Proficiency in scripting (Bash, Shell, or Python) for automation tasks. Experience with monitoring tools like Zabbix, Prometheus, Grafana, or similar. Familiarity with cloud platforms (AWS, DigitalOcean, GCP, etc.) and managing cloud infrastructure. Experience working with compliance tools or frameworks like SOC 2, ISO 27001, etc. (preferred but not mandatory). Strong troubleshooting skills and the ability to fix critical issues on the fly. Flexibility in work hours for emergency incident handling.
Nice to Have
Experience with container technologies (Docker, Kubernetes). Exposure to CI/CD pipelines and version control systems like Git. Knowledge of firewall and network configurations.
Job Types: Full-time, Permanent
Pay: ₹15,000.00 - ₹18,000.00 per month
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Education: Bachelor's (Required)
Experience: UNIX: 1 year (Required); Linux: 1 year (Required)
Work Location: In person
Speak with the employer: +91 9971014332
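Purely as an illustration of the scripted monitoring and automation this role calls for: a minimal Python health-check sketch. The host names, health URL, and disk threshold are hypothetical, and a real setup would feed results into Zabbix, Prometheus, or an alerting channel rather than printing them.

```python
# Illustrative sketch: basic reachability, disk, and HTTP health checks of the kind
# an ops engineer might script before wiring results into monitoring/alerting.
# Host names, URL, and thresholds are hypothetical placeholders.
import shutil
import socket

import requests

HOSTS = ["db01.example.internal", "web01.example.internal"]  # hypothetical
HEALTH_URL = "https://web01.example.internal/healthz"        # hypothetical
DISK_ALERT_PCT = 90

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    # Plain TCP connect check, e.g. SSH (22) or Postgres (5432).
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def disk_usage_pct(path: str = "/") -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}: ssh reachable = {port_open(host, 22)}")
    pct = disk_usage_pct("/")
    if pct >= DISK_ALERT_PCT:
        print(f"ALERT: root filesystem at {pct:.1f}% used")
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
        print(f"health endpoint: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"ALERT: health endpoint unreachable: {exc}")
```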
Posted 3 days ago
1.0 years
3 - 7 Lacs
Delhi
Remote
Job Summary: Core software development activities. You will get the chance to work on the full DevOps life cycle. We believe in the overall growth of every individual along with the company, and we strive to give the best exposure to every team member. Required Experience and Qualifications: Good technical knowledge of any of these (AWS, GCP, Azure). Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹60,000.00 per month Benefits: Commuter assistance, Flexible schedule, Leave encashment, Work from home Schedule: Morning shift Supplemental Pay: Performance bonus, Yearly bonus Application Question(s): Total experience in GCP Experience: GCP DevOps: 1 year (Required) Expected Start Date: 01/07/2025
Posted 3 days ago
7.0 - 12.0 years
6 - 14 Lacs
Chennai
Work from Office
7+ years of professional DevOps / production operations experience. 5+ years of cloud-native application delivery experience. Strong hands-on experience with CI/CD tools like GitLab, GitHub, Jenkins, etc. Intimate knowledge of public cloud infrastructures (GCP preferred; AWS, Azure). Hands-on experience working on core cloud services: Kubernetes, compute, storage, network, virtualization, and Identity and Access Management (IAM). Expert level in the current technology stack: Helm, K8s, Istio, GCP, AWS, GKE, EKS, Terraform, SRE/DevOps practices, or equivalent. Strong proven experience in Infrastructure as Code (IaC), responsible for building robust platforms using automation. Experience building automation and deploying high-quality software with test frameworks, CI/CD, and progressive delivery. Strong development experience in one or more programming languages - Python, Terraform, Golang, Java, etc. Advanced knowledge of Linux-based systems and runtimes. DevOps mindset and familiarity with the concept of Site Reliability Engineering - an inherent sense of ownership through the development and deployment lifecycle; you understand what it takes to run mission-critical software in production. Ability to prioritize tasks, work independently, and work collaboratively in an agile environment.
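As a hedged illustration of the kind of Kubernetes automation such a role involves (not part of the listing), the sketch below lists unhealthy pods using the official Kubernetes Python client; it assumes a valid local kubeconfig with access to a cluster such as GKE or EKS.

```python
# Illustrative sketch: report pods that are not Running/Succeeded across all
# namespaces, using the official `kubernetes` Python client. Assumes a valid
# kubeconfig is available locally.
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            bad.append((pod.metadata.namespace, pod.metadata.name, phase))
    return bad

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```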
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We have an open position for GCP DevOps -- PAN India – 5+ years' experience – need immediate joiners only. If interested, please share your resume at archana@radiansys.com. Job Description: Strong hands-on experience in GCP. Relevant experience in Terraform. Should have experience in Kubernetes and DevOps. Total experience – 5-8 years. Location - Flexible. Design, implement, and maintain CI/CD pipelines using Jenkins, Codefresh, or GitHub Actions. Collaborate with cross-functional teams to identify areas for process improvement and implement changes using Agile methodologies. Troubleshoot complex issues related to infrastructure deployment and management on GCP and AWS platforms. Configure and manage networking components: VPCs, subnets, firewalls, load balancers, DNS, and VPNs. Manage IAM roles and policies in AWS and GCP to ensure secure and appropriate access to cloud resources. Handle user, group, and service account permissions, including provisioning, auditing, and troubleshooting access-related issues. Develop and maintain Terraform-based infrastructure and reusable modules in a cloud environment. Develop and maintain automation scripts using Bash, Python, or other relevant scripting languages. Create and maintain documented Standard Operating Procedures (SOPs) and support guides. Thanks & Regards, Archana Sharma | IT Recruiter | Radiansys INC Email: Archana@radiansys.com
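For illustration only: a minimal Python sketch of the IAM auditing described above, flagging active AWS access keys older than a given age with boto3. The 90-day threshold is an assumed policy choice, and a comparable audit on GCP would use the Cloud IAM APIs instead.

```python
# Illustrative sketch: flag active AWS IAM access keys older than MAX_AGE_DAYS.
# Uses standard boto3 IAM calls; assumes AWS credentials are configured locally.
# The 90-day threshold is a hypothetical policy choice.
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90

def stale_access_keys():
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    findings = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                age = (now - key["CreateDate"]).days
                if key["Status"] == "Active" and age > MAX_AGE_DAYS:
                    findings.append((user["UserName"], key["AccessKeyId"], age))
    return findings

if __name__ == "__main__":
    for user, key_id, age in stale_access_keys():
        print(f"{user}: key {key_id} is {age} days old")
```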
Posted 3 days ago
7.0 - 12.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Title: Cloud Platform Engineer Location: Bengaluru, India Experience Level: 7+ years Employment Type: Full-time About the Role We are seeking a highly skilled and automation-driven Cloud Platform Engineer to design, implement, and manage scalable cloud landing zones and infrastructure solutions across Azure and GCP. You will play a key role in building reusable modules, enforcing governance through policy-as-code, and enabling CI/CD pipelines to support diverse workloads including software applications, data platforms, and AI solutions. Key Responsibilities Design and implement Cloud Landing Zones using Azure CAF and GCP Project Factory Develop reusable Terraform and Bicep modules for infrastructure provisioning Build and maintain CI/CD pipelines using Azure DevOps and GitHub Actions Enforce guardrails and governance using policy-as-code frameworks Collaborate with application and data teams to support workload onboarding Automate infrastructure tasks using Python and Bash scripting Ensure security, scalability, and compliance across cloud environments Perform code reviews, maintain documentation, and contribute to best practices Required Skills Strong hands-on experience with Terraform (must) and Bicep Proficiency in CI/CD tools: Azure DevOps, GitHub Actions Experience with Azure Landing Zones and GCP Project Factory Solid scripting skills in Python and Bash Familiarity with policy-as-code tools (e.g., Azure Policy, Sentinel, OPA) Understanding of cloud architecture, networking, and security principles Automation mindset with strong problem-solving and critical thinking skills Preferred Qualifications Certifications: Azure Solutions Architect, Google Cloud Architect, Terraform Associate Experience with container orchestration (e.g., Kubernetes) Exposure to multi-cloud environments and hybrid cloud setups
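Hedged illustration (not from the listing): one common policy-as-code pattern is to scan a Terraform plan exported as JSON (`terraform show -json tfplan > plan.json`) and fail the pipeline when a guardrail is violated. The watched resource types and the required-tag rule below are hypothetical examples.

```python
# Illustrative sketch: a tiny policy-as-code gate over a Terraform JSON plan.
# Exits non-zero if any created/updated resource of the watched types is missing
# an "owner" tag/label. Resource types and the tag rule are hypothetical examples.
import json
import sys

WATCHED_TYPES = {"google_storage_bucket", "azurerm_storage_account"}  # hypothetical
REQUIRED_TAG = "owner"

def violations(plan_path: str):
    with open(plan_path) as fh:
        plan = json.load(fh)
    bad = []
    for rc in plan.get("resource_changes", []):
        if rc["type"] not in WATCHED_TYPES:
            continue
        if not set(rc["change"]["actions"]) & {"create", "update"}:
            continue
        after = rc["change"].get("after") or {}
        tags = after.get("labels") or after.get("tags") or {}
        if REQUIRED_TAG not in tags:
            bad.append(rc["address"])
    return bad

if __name__ == "__main__":
    failed = violations(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    for address in failed:
        print(f"policy violation: {address} missing '{REQUIRED_TAG}' tag")
    sys.exit(1 if failed else 0)
```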
Posted 3 days ago
2.0 years
2 - 7 Lacs
Rājkot
On-site
About Bloomfield Innovations: Bloomfield Innovations is a leading provider of integrated Education ERP solutions designed to streamline and enhance educational institutions' administrative and academic processes. We deliver intelligent systems that simplify administration, improve academic outcomes, and enhance educational excellence. At the heart of our vision is the commitment to harness cutting-edge technologies, including Artificial Intelligence, to redefine the future of education.
Role Overview: We are looking for an innovative and driven AI Developer to join our development team. You will be responsible for designing and deploying AI-powered features within our Education ERP platform to optimize academic processes, personalize learning, and automate complex administrative tasks. As an AI Developer at Bloomfield Innovations, you will work closely with product managers and software support engineers. This role combines cutting-edge AI/ML development with real-world impact in the education domain.
Key Responsibilities:
● Design, develop, and integrate AI/ML models into the Bloomfield ERP platform.
● Implement smart features for product development, e.g.: predictive analytics for student performance and dropout risk, intelligent scheduling and resource planning, AI-powered chatbots for student/parent queries, personalized learning recommendations, and automated grading and feedback systems.
● Use NLP to analyze academic content, feedback, and communications.
● Work with large datasets from colleges and universities to derive actionable insights.
● Collaborate with UI/UX teams to bring AI solutions to life in the product interface.
● Ensure that all AI applications adhere to data privacy, security, and ethical standards in education.
● Collaborate with cross-functional teams to integrate AI solutions into the ERP platform seamlessly.
● Evaluate and improve AI models for accuracy, scalability, and real-world performance.
● Stay updated with the latest advancements in AI and EdTech to bring innovative ideas into product development.
Required Skills & Qualifications:
● Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
● Proven experience in developing and deploying machine learning or AI models in real-world applications.
● Proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
● Strong knowledge of algorithms, statistics, and data structures.
● Experience working with NLP libraries (e.g., spaCy, NLTK, Hugging Face Transformers).
● Ability to work with structured and unstructured data.
● Understanding of RESTful APIs, cloud platforms (AWS/GCP/Azure), and MLOps practices.
● Strong problem-solving skills and the ability to work collaboratively in a product-driven environment.
Job Type: Full-time
Pay: ₹20,000.00 - ₹60,000.00 per month
Location Type: In-person
Schedule: Day shift
Ability to commute/relocate: Rajkot, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Software development: 2 years (Preferred)
Work Location: In person
Speak with the employer: +91 9099939554
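As a hedged, illustrative sketch of one of the listed features (dropout-risk prediction), the snippet below trains a scikit-learn classifier on a small synthetic dataset; the feature names and data are invented for demonstration and are not from Bloomfield's product.

```python
# Illustrative sketch: dropout-risk prediction with scikit-learn on synthetic data.
# Feature names and labels are hypothetical; a real model would use institutional data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n = 1000

# Hypothetical features: attendance %, average grade, fee-payment delays, LMS logins/week.
X = np.column_stack([
    rng.uniform(40, 100, n),   # attendance_pct
    rng.uniform(30, 95, n),    # avg_grade
    rng.integers(0, 5, n),     # fee_delays
    rng.integers(0, 20, n),    # lms_logins_per_week
])
# Synthetic label: low attendance and low grades raise dropout risk, plus a little noise.
risk = (X[:, 0] < 60) & (X[:, 1] < 50)
y = (risk | (rng.uniform(size=n) < 0.05)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC on held-out synthetic data: {roc_auc_score(y_test, probs):.3f}")
```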
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Senior Pre Sales Engineer- AWS Location: Pune Job Type: Full-time Department: Pre Sales About Datamotive Datamotive is a leading provider of cloud portability solutions, helping customers move and manage workloads across clouds with speed and confidence. We’re redefining cloud portability—and we need passionate, skilled, and driven individuals to join us on this journey. We’re looking for a Senior Pre Sales Engineer who’s eager to work hands-on with cutting-edge cloud technologies, directly impacting customer success and driving solution excellence. What You’ll Do Act as a trusted technical advisor to enterprise, mid-market, and SMB customers—evangelizing and positioning Datamotive’s unique cloud portability solutions. Lead and participate in deep technical discussions and whiteboarding sessions during pre-sales engagements. Architect and design fully integrated disaster recovery (DR) and migration solutions tailored to customer needs. Deliver compelling technical presentations and demos to CxO-level and technical audiences. Collaborate closely with the sales team, helping qualify opportunities and define customer requirements. Document detailed technical specifications, proposals, and internal handoff documentation. Provide field insights and ongoing feedback to Product Management to influence product strategy and roadmap. Identify and share common technical challenges that could be addressed through automation or product enhancements. Champion customer success from the first engagement through to solution handover. What You Bring 5+ years of customer-facing pre-sales engineering or consulting experience, preferably in cloud-native, DRaaS, or MSP environments. Proven hands-on expertise in: - Cloud platforms: Experience with at least two public clouds (e.g., AWS, Azure, GCP). - Cloud management, disaster recovery, data backup, and enterprise cloud migration. - Scripting using Go (Golang), Python, and Shell. - Infrastructure as Code (IaC) with Terraform (mandatory). Strong solutioning mindset—able to design innovative, business-aligned solutions that solve real problems. Comfortable in fast-paced environments—juggling multiple demos, discovery calls, and follow-ups daily. Solid understanding of enterprise architectures and multi-tier system designs. Confidence in working with C-level stakeholders and translating business needs into technical solutions. Entrepreneurial, proactive, and resourceful—someone who gets things done. Excellent communication, collaboration, and presentation skills. Why Join Us? Work with cutting-edge cloud tech in an agile, fast-moving environment. Take ownership of your role and make a real impact across customers and product. Be part of a collaborative, no-ego team that values mentorship and learning. Enjoy ample opportunities for growth as we scale our presence and product. Ready to Apply? If you’re excited about solving cloud challenges and helping customers succeed, we’d love to talk. Send your resume to: loveena@datamotive.io
Posted 3 days ago
4.0 - 12.0 years
7 - 8 Lacs
Ahmedabad
On-site
Urgently looking for experienced Clinical Research Associates (CRAs) to join the clinical research team. The ideal candidate will have hands-on experience in clinical trial monitoring, site coordination, and regulatory compliance, ensuring smooth execution of clinical trials as per GCP guidelines. Job Title: Clinical Research Associate (CRA) Location: Gota, Ahmedabad Positions Available: 2 Experience Required: 4 to 12 Years Salary Range: Up to ₹8 LPA (Based on experience and skills) Roles & Responsibilities: Perform the duties of a Clinical Research Associate. Perform on-site, in-process, and retrospective monitoring visits for the clinical phase of clinical trials, PK, and BA/BE studies. Conduct site feasibility, qualification, initiation, monitoring, and close-out meetings at different CROs for clinical trials, PK, and BA/BE studies. Oversee the progress of clinical trials and BA/BE studies, ensuring that they are conducted, recorded, and reported in accordance with the protocol, Standard Operating Procedures (SOPs), Good Clinical Practice (GCP), and the applicable regulatory requirement(s). Check documents of CRO/site internal system audits at specified intervals, along with related documents and CAPA. Preferred Skills: Experience with CROs or pharmaceutical companies. Regards, HR Team Job Types: Full-time, Permanent Pay: ₹66,000.00 - ₹70,000.00 per month Benefits: Cell phone reimbursement Commuter assistance Health insurance Internet reimbursement Leave encashment Life insurance Provident Fund Schedule: Day shift Supplemental Pay: Overtime pay Performance bonus Yearly bonus Work Location: In person
Posted 3 days ago
0 years
4 - 9 Lacs
Noida
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant – Java & GCP Developer. In this role, you will be responsible for developing Microsoft Access databases, including tables, queries, forms, and reports, using standard IT processes, with data normalization and referential integrity. Responsibilities Experience with Spring Boot Must have GCP experience Experience with microservices development Extensive experience working with Java APIs with Oracle is critical. Extensive experience in Java 11 SE Experience with unit testing frameworks JUnit or Mockito Experience with Maven/Gradle Professional, precise communication skills Experience in API designing, troubleshooting, and tuning for performance Experience designing and troubleshooting Java API services and microservices Qualifications we seek in you! Minimum Qualifications BE/B.Tech/M.Tech/MCA Preferred qualifications Experience with Oracle 11g or 12c PL/SQL is preferred Experience in health care or pharmacy related industries is preferred. Familiarity with Toad and/or SQL Developer tools Experience working with Angular and the Spring Boot framework as well Experience with Kubernetes, Azure cloud Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Noida Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 25, 2025, 12:03:20 PM Unposting Date Ongoing Master Skills List Consulting Job Category Full Time
Posted 3 days ago
10.0 - 15.0 years
3 - 7 Lacs
Noida
On-site
Location: Noida, India Thales people architect identity management and data protection solutions at the heart of digital security. Business and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information and encrypt data to make the connected world more secure. Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai, Pune among others. Over 1800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India’s growth story by sharing its technologies and expertise in Defence, Transport, Aerospace and Digital Identity and Security markets. Job Profile: Part of the SRE (Site Reliability Engineering) team. Deployment and proactive improvement in monitoring of complex Kubernetes/microservice architecture based applications on any cloud provider (Azure/AWS/GCP). Required Skills: Ability to demonstrate solid skills in Azure/AWS/GCP, Kubernetes, and Unix/Linux platforms. Ability to demonstrate knowledge about cluster and cloud/VM based solution deployment and management, including knowledge about networking, servers and storage. Experience in DevOps with Kubernetes. Strong knowledge of CI/CD tools (Jenkins, Bamboo etc.) Experience in cloud platforms and infrastructure automation. Experience in Java/Python/Go to support workloads implemented in these languages. Experience with Tomcat and Spring Boot based Java workloads. Experience with at least one observability platform like Prometheus, Splunk, DataDog, Honeycomb etc. Expertise in Python/Go or a similar scripting language to develop tools required for the SRE job. Experience either in conducting or participating in Root Cause Analysis sessions. Must have completed minimum one project end-to-end in a technical DevOps role, preferably in a global organization. Practical understanding of Ansible, Docker, and implementation of solutions based upon these tools is preferred. Ability to handle escalations. Proven ability to learn and apply new skills and processes quickly and train others in the team. Demonstrated experience as an individual contributor with customer focus and service orientation, with solid leadership and coaching skills. Ability to communicate courteously and effectively with customers, third-party vendors and partners. Proficiency with Customer Relationship Management (CRM) software such as JIRA and Confluence. Excellent written and verbal communication skills in English. Desired Skills: Exposure to DataDog, Kafka, Keycloak or similar solutions. Good exposure to Terraform/Terragrunt. OpenTelemetry. Experience Required: 10 - 15 years of total experience, mainly in a DevOps role. Note: This is a hands-on development and design role. At Thales we provide CAREERS and not only jobs. With Thales employing 80,000 employees in 68 countries our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working.
Great journeys start here, apply now!
Posted 3 days ago
0 years
8 Lacs
Noida
On-site
Country/Region: IN Requisition ID: 26752 Location: INDIA - NOIDA - BIRLASOFT OFFICE Title: Sr Lead Business Consultant Description: Area(s) of responsibility Title - GCP Cloud DevOps Ability to use a wide variety of open-source technologies and cloud services, with required experience in Google Cloud Platform. Proficiency in the Google Cloud Platform environment, operations, and automation. Proficiency in building and administering CI/CD pipelines. Experience with automation/configuration management tools. Ability to handle code deployments in all environments. Experience in building, maintaining, and monitoring configuration standards. Experience with containerization management. Exposure to DevOps on the AWS platform is good to have. Experience with Kubernetes.
Posted 3 days ago
8.0 years
5 - 8 Lacs
Noida
On-site
At Cotality, we are driven by a single mission—to make the property industry faster, smarter, and more people-centric. Cotality is the trusted source for property intelligence, with unmatched precision, depth, breadth, and insights across the entire ecosystem. Our talented team of 5,000 employees globally uses our network, scale, connectivity and technology to drive the largest asset class in the world. Join us as we work toward our vision of fueling a thriving global property ecosystem and a more resilient society. Cotality is committed to cultivating a diverse and inclusive work culture that inspires innovation and bold thinking; it's a place where you can collaborate, feel valued, develop skills and directly impact the real estate economy. We know our people are our greatest asset. At Cotality, you can be yourself, lift people up and make an impact. By putting clients first and continuously innovating, we're working together to set the pace for unlocking new possibilities that better serve the property industry.
Job Description: In India, we operate as Next Gear India Private Limited, a fully-owned subsidiary of Cotality with offices in Kolkata, West Bengal, and Noida, Uttar Pradesh. Next Gear India Private Limited plays a vital role in Cotality's Product Development capabilities, focusing on creating and delivering innovative solutions for the Property & Casualty (P&C) Insurance and Property Restoration industries. While Next Gear India Private Limited operates under its own registered name in India, we are seamlessly integrated into the Cotality family, sharing the same commitment to innovation, quality, and client success. When you join Next Gear India Private Limited, you become part of the global Cotality team. Together, we shape the future of property insights and analytics, contributing to a smarter and more resilient property ecosystem through cutting-edge technology and insights.
Create and maintain optimal data pipeline architecture.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP ‘big data’ technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with data and analytics experts to strive for greater functionality in our data systems.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Evaluate feasibility and make recommendations, considering things such as customer requirements, time limitations, and system limitations.
Serve as a mentor to junior staff by conducting technical training sessions and reviewing project outputs.
Build a documentation repository for knowledge transfer and developing expertise in multiple areas.
Provide operational support on complex/escalated issues to diagnose and resolve incidents in production data pipelines.
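Illustrative only (not Cotality code): a minimal Apache Airflow DAG sketch of the extract-transform-load pipeline work described above. The DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Illustrative sketch: a minimal daily ETL DAG for Apache Airflow 2.x.
# The DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g. pull the previous day's records from a source API or bucket
    print("extracting source data")

def transform():
    # e.g. clean, deduplicate, and conform records to the warehouse schema
    print("transforming records")

def load():
    # e.g. load the transformed data into BigQuery / a warehouse table
    print("loading into warehouse")

with DAG(
    dag_id="property_data_etl",          # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```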
Job Qualifications:
BS Degree or equivalent work experience in a software engineering discipline
Typically has 8-10 years’ experience in an applicable software development environment
Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases (SQL, Postgres)
Experience with diverse coding, profiling, and visualization approaches including authoring SQL queries, BigQuery, Python, Google Cloud or equivalent
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Strong analytic skills related to working with unstructured datasets
Hands-on experience with Cloud Platforms (AWS, GCP, or Azure)
Experience in designing and implementing large-scale event-driven architectures
Understanding of data warehousing and data modeling techniques
Understanding of Big Data, Cloud, Machine Learning approaches and concepts (preferred)
Experience working as a member of a distributed team
Ability to organize and coordinate with stakeholders across multiple functions and geographic locations
Ability to develop and write technical specifications
Coaching and teaching skills to mentor less experienced team members
Excellent analytical and problem management skills
Good interpersonal skills and positive attitude
Experience with the following tools and technologies: Elastic Search, Kafka; Google Cloud; Airflow (preferred); Python, Java, C++; BigQuery
Thrive with Cotality
At Cotality, we’re committed to supporting your whole self - at work and beyond. Our India benefits package is thoughtfully designed to promote your well-being, financial security, and professional growth. From comprehensive health coverage to flexible leave, retirement planning, and mental health support, we help you thrive every step of the way. Highlights include:
Health & Wellness: Company-paid Mediclaim Insurance, routine dental and vision care (including LASIK and cataract), annual health check-ups, and maternity benefits.
Mental Health: Access to 12 free sessions with top therapists and coaches for you and your dependents via Lyra.
Leave & Time Off: 11 paid holidays (state-specific), 10 well-being half days, paid sick, maternity, paternity, caregiving, bereavement, and volunteer time off.
Family Support: Coverage available for spouse, children, and parents or in-laws; includes maternity and parental leave.
Financial Benefits: ₹10,400 annual well-being account; ₹15,000 medical reimbursement allowance; ₹19,200 conveyance allowance; House Rent Allowance with tax benefits.
Insurance & Protection: Group Term Life and Personal Accident Insurance at 5x annual salary (company-paid).
Retirement & Savings: Provident Fund with employer matching; optional National Pension Scheme (NPS) contributions (pre-tax).
Extras: Performance bonuses, recognition rewards, and exclusive employee discounts.
Cotality's Diversity Commitment: Cotality is fully committed to employing a diverse workforce and creating an inclusive work environment that embraces everyone’s unique contributions, experiences and values. We offer an empowered work environment that encourages creativity, initiative and professional growth and provides a competitive salary and benefits package. We are better together when we support and recognize our differences.
Equal Opportunity Employer Statement: Cotality is an Equal Opportunity employer committed to attracting and retaining the best-qualified people available, without regard to race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, record of offences, age, marital status, family status or disability. Cotality maintains a Drug-Free Workplace. Please apply on our website for consideration.
Posted 3 days ago
2.0 years
7 - 9 Lacs
Noida
On-site
Overview about Ripik.AI: Ripik.ai is a fast-growing industrial AI SaaS start-up founded by IIT-D/BITS alumni with extensive experience at McKinsey, IBM, Google, and others. It is backed by marquee VC funds like Accel and Venture Highway and 25+ illustrious angels, including 14 unicorn founders. Ripik.ai builds patented full-stack software for automation of decision-making on the factory floor. Today, it is deployed at more than 15 of the largest and most prestigious enterprises in India, including the market leaders in steel, aluminium, cement, pharma, paints, consumer goods, and others. It is one of India’s very few AI product start-ups to be a partner to GCP, Azure and AWS. We are also the AI partner of choice for CII, ICC and NASSCOM. Roles & Responsibilities We are seeking a highly motivated Automation and Machine Vision Engineer to join our Industrial AI and Automation team. The ideal candidate will have hands-on experience in developing and deploying computer vision solutions for manufacturing, quality control, and robotic inspection. You will be responsible for designing AI-based vision systems using cameras, deep learning models, and edge computing platforms. Develop and deploy machine vision applications using industrial cameras (e.g., Basler, FLIR). Design AI-based inspection systems for detecting defects such as scratches, dents, mislabels, and missing components. Build and train deep learning models using CNNs, YOLO, and R-CNN for object detection, OCR, classification, and segmentation. Integrate vision systems into automation workflows, PLCs, or robotic arms using standard protocols. Work with image acquisition pipelines, camera calibration, lighting setup, and industrial communication standards. Collaborate with software, automation, and mechanical teams to build end-to-end inspection systems. Optimize model performance on edge devices (e.g., NVIDIA Jetson, Raspberry Pi, etc.). Perform on-site testing, calibration, and commissioning at industrial client locations. Required Skills, Competencies & Experience: B.E./B.Tech in Instrumentation, Electronics, Mechatronics, Computer Science, or a related field. 2–5 years of relevant experience in machine vision, automation, or AI-based quality control systems. Strong problem-solving skills and ability to work independently in R&D or client-facing roles. Preferred Qualifications: Strong knowledge of OpenCV, Python, TensorFlow/PyTorch, and image processing. Experience with YOLO, Faster R-CNN, SSD, or other real-time object detection models. Hands-on with industrial camera systems (Basler, Hikvision, Cognex, etc.). Familiarity with edge AI platforms like NVIDIA Jetson Nano/Xavier. Understanding of industrial automation systems and SCADA/PLC integration. Experience with machine vision software (e.g., Halcon, LabVIEW Vision, MVTec) is preferred. Familiarity with SCADA, Historian systems, or MES integration. Exposure to virtualization, server-client architecture, or redundant systems. Knowledge of video analytics, CCTV integration, and industrial automation security. Physical Requirements: Ability to travel to project sites and conduct field inspections. Comfortable working in hazardous industrial environments (oil refineries, chemical plants, etc.) What can you expect?
Ability to shape the future of manufacturing by leveraging best-in-class AI and software; we are a unique organization with niche skill set that you would also develop while working with us World class work culture, coaching and development Mentoring from highly experienced leadership from world class companies. International Exposure Work Location – Noida (Work from Office) Job Type: Full-time Pay: ₹700,000.00 - ₹900,000.00 per year Benefits: Food provided Health insurance Provident Fund Schedule: Day shift Work Location: In person
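As an illustration of the AI-based visual inspection described in this role (not the company's actual pipeline), the sketch below flags candidate surface defects with classical OpenCV operations. The image path, blur kernel, and minimum blob area are hypothetical, and a production system would typically add a trained detector (e.g., YOLO) on top of such preprocessing.

```python
# Illustrative sketch: classical OpenCV pass to flag candidate surface defects
# (dark blobs) on a part image. Path, blur kernel, and area threshold are
# hypothetical; a real inspection system would combine this with a trained model.
import cv2

IMAGE_PATH = "part_under_inspection.png"   # hypothetical image
MIN_DEFECT_AREA_PX = 50

image = cv2.imread(IMAGE_PATH)
if image is None:
    raise FileNotFoundError(IMAGE_PATH)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu threshold, inverted so dark defects become white blobs.
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [c for c in contours if cv2.contourArea(c) >= MIN_DEFECT_AREA_PX]

for contour in defects:
    x, y, w, h = cv2.boundingRect(contour)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

print(f"candidate defects found: {len(defects)}")
cv2.imwrite("inspection_result.png", image)
```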
Posted 3 days ago
3.0 - 6.0 years
6 - 14 Lacs
Chennai
Work from Office
5+ years of professional DevOps / production operations experience. 3+ years of Cloud Native application delivery experience. Hands-on experience with Kubernetes and CI/CD tools like GitLab and TeamCity. Intimate knowledge of public cloud infrastructures (GCP preferred; AWS, Azure). Hands-on experience working on core cloud services: compute, storage, network, virtualization, and Identity and Access Management (IAM). Experience in Infrastructure as Code (IaC), to be responsible for building robust platforms using automation. Knowledge of Linux-based systems and runtimes. Development experience in one or more programming languages - Python, Terraform, Java, etc. Experience with the current technology stack: Helm, K8s, Istio, GCP, AWS, GKE, EKS, Terraform, SRE/DevOps practices, or equivalent. Experience working with at least one Cloud Provider automation tool.
Posted 3 days ago
6.0 years
6 - 10 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Design and develop software applications and application components in an agile environment using the E2E Java framework on the Azure or GCP cloud platform Provide explanations and interpretations within area of expertise Create high-quality software by conducting peer design or code reviews and developing automated test cases Apply design patterns and optimize the performance of services Act as a hands-on engineer, leading by example and working independently Build production-ready features and capabilities adhering to the best engineering standards Share knowledge internally with peers and leaders, acting as a resource for teams and business units within areas of expertise Collaborate with Principal Engineers or Technical Architects to develop prototypes into production-ready features Leverage the latest technologies to solve complex problems facing the health care industry Guide and mentor junior engineers within the organization Write SQL queries Understand the ins and outs of our product and identify gaps Analyze data to identify trends in product quality or defects with the goal of mitigating and preventing recurrence and future defects Work with less structured, more complex issues Serve as a resource to others Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Graduate degree or equivalent experience 6+ years of software development experience, with at least 5+ years focused on Java or J2EE software development on Azure/GCP cloud platforms, including Java, the Spring framework, multi-threading, caching techniques, software engineering best practices, and performance optimization of services using memory management, garbage collection, CI/CD systems such as Jenkins, and code management tools like GitHub. Java full stack (API backend heavy) 6+ years of professional software development experience on the Java platform 3+ years of experience in APIs and integrations: Experience designing and developing RESTful APIs and event-driven architectures using Kafka or similar technologies Experience designing and developing data-driven applications using RDBMS (Oracle, PL/SQL) and/or NoSQL datastores At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone.
We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
Posted 3 days ago
10.0 years
1 - 7 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Design and implement infrastructure as code using tools like Terraform. Automate repetitive tasks to improve efficiency using Python and other scripting tools. Expert-level experience with Apache, Nginx, or Envoy, or medium-level experience with HAProxy. Programming languages: Go, or experience with Consul and Consul Template. Ability to leverage AI tools for day-to-day activities to build scaling applications. Build, manage, and maintain CI/CD pipelines for seamless application deployments. Integrate with tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps; preferred to have experience with or an understanding of Agentic DevOps. Monitor system performance and availability using monitoring tools like Prometheus, Grafana, or Datadog. Develop strategies for improving system reliability and uptime. Respond to incidents, troubleshoot issues, and conduct root cause analysis; the ideal candidate will have the ability to set up an AIOps ecosystem and reduce the Ops overhead. Deploy, manage, and scale containerized applications using Docker and Kubernetes. Implement Kubernetes cluster configurations, including monitoring, scaling, and upgrades. Use tools like Ansible or Chef for configuration management and provisioning. Work with tools like Consul for service discovery and dynamic configuration. Manage network configurations, DNS, and load balancers. Use tools like Packer to automate the creation of machine images. Ensure standardized environments across development, testing, and production. Manage and deploy cloud infrastructure on AWS, Azure, or Google Cloud. Optimize cloud resources to ensure cost-efficiency and scalability. Collaborate with development and QA teams to ensure smooth software delivery. Provide mentorship to junior team members on best practices and technologies. Lead system implementation projects, ensuring that new systems are integrated seamlessly into existing infrastructure. Consult with clients and stakeholders to understand their needs and provide expert advice on system integration and implementation strategies. Develop and execute integration plans, ensuring that all components work together effectively. Provide training and support to clients and internal teams on new systems and technologies. Stay updated with the latest industry trends and technologies to provide cutting-edge solutions. Participate in rotational shifts to provide 24/7 support for critical systems and infrastructure, promptly addressing any issues that arise. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field Certifications in Kubernetes, Terraform, or any public cloud platform like AWS, Azure, or GCP 10+ years of experience in a DevOps role or similar Experience with distributed systems and microservices architecture DevOps Tools: Solid experience with tools like Terraform, Kubernetes, Docker, Packer, and Consul CI/CD Pipelines: Hands-on experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.) Version Control: Experience with Git and branching strategies System Implementation & Integration: Proven experience in system implementation and integration projects Programming Languages: Proficiency in Python and experience with other scripting languages (e.g., Bash) Cloud Platforms: Familiarity with AWS, Azure, or GCP services. Proven experience implementing Public Cloud Services using Terraform within Terraform Enterprise or HCP Terraform Infrastructure as Code: Proficiency in tools like Terraform and Ansible. Proven experience in authoring Terraform and shared Terraform Modules Monitoring and Logging: Knowledge of monitoring tools (e.g., Prometheus, Grafana, Datadog) and logging tools (e.g., ELK Stack) Consulting Skills: Demonstrated ability to consult with clients and stakeholders to understand their needs and provide expert advice Soft Skills: Solid analytical and problem-solving skills Proven excellent communication and collaboration abilities Ability to work in an agile and fast-paced environment At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
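Illustrative only: a small Python sketch of the monitoring/alerting side of this role, querying a Prometheus server's HTTP API for scrape targets that are currently down. The Prometheus base URL is a hypothetical placeholder.

```python
# Illustrative sketch: query Prometheus' HTTP API (/api/v1/query) for scrape
# targets reporting up == 0 and print them, e.g. as input to an alert or runbook.
# The Prometheus base URL is a hypothetical placeholder.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical

def down_targets():
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    if payload.get("status") != "success":
        raise RuntimeError(f"query failed: {payload}")
    return payload["data"]["result"]

if __name__ == "__main__":
    for series in down_targets():
        labels = series["metric"]
        print(f"DOWN: job={labels.get('job')} instance={labels.get('instance')}")
```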
Posted 3 days ago
4.0 years
10 - 20 Lacs
Noida
Remote
Job Title: Node.js Developer Job Type: Full-Time Work Location: Remote/Onsite (as per project requirements) Experience: 4 to 7 Years (Minimum 4 years onsite preferred; 5+ years remote acceptable) Pass-Out Year: 2019 or earlier (Mandatory) Notice Period: Immediate to max 15 days Career Gap: Should not have a gap of more than 3 months About the Role: We are seeking a skilled and experienced Node.js Developer to join our dynamic engineering team. The ideal candidate will be responsible for developing and maintaining scalable backend solutions, integrating with Kafka messaging systems, and implementing CI/CD pipelines in a DevOps-oriented environment. Key Responsibilities: Develop and maintain robust backend applications using Node.js Integrate microservices and backend systems with Kafka Collaborate with DevOps teams to design and maintain CI/CD pipelines Ensure code quality, security, and scalability across all services Troubleshoot and debug complex technical issues in production and development environments Work closely with cross-functional teams (Frontend, QA, DevOps) to deliver high-quality products Contribute to code reviews and architectural decisions Mandatory Skills: Strong experience with Node.js and asynchronous programming patterns Proficient in integrating and working with Apache Kafka Hands-on experience with DevOps practices and tools (Docker, Kubernetes, etc.) Expertise in setting up and managing CI/CD pipelines (GitLab CI, Jenkins, GitHub Actions, etc.) Knowledge of RESTful APIs and microservice architecture Experience with cloud platforms (AWS/GCP/Azure) Additional Requirements: Must have graduated in or before 2019 Must have 4-7 years of total experience (4-5 years onsite OR 5-7 years remote) Must not have been unemployed for more than 3 months Strong communication and collaboration skills Self-starter with a problem-solving mindset Nice to Have: Experience with TypeScript Familiarity with monitoring tools like Prometheus, Grafana, ELK Agile development practices (Scrum/Kanban) Job Type: Full-time Pay: ₹1,000,000.00 - ₹2,000,000.00 per year Schedule: Monday to Friday Experience: Node.js: 4 years (Required) Work Location: In person
Posted 3 days ago
1.0 - 3.0 years
9 - 10 Lacs
Noida
On-site
About the Role
We are looking for a passionate and skilled Software Engineer with strong Node.js experience to join our backend engineering team in Noida. You’ll be working on a high-scale platform that processes over a million transactions per day, along with tens of millions of other operations including bill fetches, notifications, and reminders. Our systems are engineered to maintain five 9s (99.999%) availability, ensuring ultra-high reliability and performance. You will play a critical role in building and maintaining backend services that support this scale, contributing to the development of distributed, resilient systems while collaborating with cross-functional teams.
Key Responsibilities
Design, develop, and maintain backend services using Node.js. Build and enhance distributed systems and microservices that support our products at scale. Collaborate closely with frontend, QA, and product teams to deliver seamless features and functionality. Ensure performance, scalability, and reliability of backend components. Integrate and manage systems such as Kafka, Redis, or Cassandra as part of the tech stack. Maintain clean, testable code and contribute to continuous integration pipelines. Explore and leverage AI-powered development tools (e.g., GitHub Copilot, Tabnine) to boost development efficiency.
Required Qualifications
1–3 years of hands-on experience with Node.js in backend development. Exposure to distributed system architecture and microservices. Solid understanding of data structures, algorithms, and asynchronous programming. Experience working with RESTful APIs.
Preferred Qualifications
Experience working with Java in backend or service-layer components. Hands-on experience with AI-assisted development tools. Exposure to cloud environments such as AWS, GCP, or Azure is beneficial.
Why Join Us?
Work with a modern, high-scale tech platform handling millions of operations daily. Use cutting-edge AI development tools in your daily workflow. Collaborative, fast-paced work environment with cross-functional ownership. Competitive compensation and career growth opportunities.
Thanks and Regards, Anuj Kanojia
Posted 3 days ago
2.0 years
3 - 8 Lacs
Āgra
On-site
Department Information Technology Role Backend Developer Locations Agra, Uttar Pradesh About Expertbells Expertbells is a startup support platform dedicated to helping entrepreneurs launch, grow, and scale their businesses through expert guidance, mentorship, and end-to-end services. Join our fast-growing team in Agra and be part of India's thriving startup ecosystem! About the Role We are hiring a Backend Developer to join our core tech team at Expertbells . In this role, you will build robust, scalable, and secure backend systems that power our web and mobile applications. You will handle everything from designing databases to integrating third-party services, ensuring seamless operations behind the scenes. Your work will directly impact how thousands of users register businesses, raise funds, and access mentorship on our digital platform. Location: Agra (On-site) Experience: Minimum 2 Years Key Responsibilities Develop and maintain backend services using Node.js . Design, implement, and manage MongoDB databases. Build, manage, and optimize RESTful APIs . Ensure API security, scalability, and performance. Collaborate closely with frontend developers ( React Native / React JS ) and DevOps teams. Integrate third-party services and external APIs as needed. Troubleshoot, debug, and upgrade backend applications. Requirements Minimum 2 years of experience as a backend developer. Proficiency in Node.js and MongoDB . Strong understanding of Express.js and backend architecture. Solid experience in building and securing RESTful APIs. Familiarity with API authentication and authorization ( JWT, OAuth ). Understanding of performance optimization, caching, and database indexing. Experience with version control systems ( Git/GitHub ). Ability to work in a collaborative, fast-paced startup environment. Good to Have Experience with cloud platforms ( AWS, GCP, Azure ) or Firebase . Familiarity with microservices or serverless architecture. Experience with socket.io, real-time applications, or push notifications. Basic knowledge of CI/CD pipelines and containerization ( Docker ). Why Join Expertbells? Be part of a rapidly growing startup solving real problems for entrepreneurs. Work on impactful products used by thousands of startups. Fast career growth with learning opportunities. Friendly and collaborative team culture.
Posted 3 days ago
0 years
0 Lacs
Noida
On-site
Noida, Uttar Pradesh, India. Job ID 768828 Join our Team About this opportunity: We are delighted to announce an exciting opportunity to join Ericsson as a Developer. Within this role, you will have the chance to develop and maintain an array of products and services, spanning across components, units, nodes, networks, systems, and solutions. Your development journey in this role will range from requirement analysis, system design, architecture design, hardware design, software design, integration, simulation, tools design, Product Lifecycle Management (PLM) support, to creating product documentation. The role resonates profoundly with the Ericsson Product Development Principles. What you will do: Public & Private Cloud administration Troubleshooting of OS & hardware related issues Manage Azure products and services (IaaS, PaaS, SaaS offerings) as per application requirements Manage Azure alerts, monitoring & Log Analytics CI/CD tools: Azure DevOps, GitHub, Jenkins Subscription & Azure cost management, OS licensing Knowledge of Terraform, Ansible Linux & Windows Server administration Ensure server & VM health monitoring Security & vulnerability remediation Support team members in health checks and collect logging data Interface with the Cloud support team for OS and hardware incidents Manage various aspects of application infrastructure - identity and access management, network troubleshooting, security Experience with Bash and PowerShell Configuration & administration of hypervisors (ESXi, Hyper-V, vCenter) Basic knowledge of network and security (firewalls, load balancers, endpoints, vulnerability and patch management) The skills you bring: Qualifications: B.E./B.Tech/M.Tech in Computer Science / MCA Azure certification: AZ-104 or similar Should have good communication skills Candidate should be ready to learn new technologies Should be a good team player Basic understanding of server hardware & configuration (Dell/HP/Lenovo) Basic understanding of cloud platforms (e.g., Azure/AWS/GCP) with hands-on experience on at least one Problem-solving skills Adaptability and flexibility Maturity and a professional attitude
Posted 3 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction We are seeking a Software Developer with expertise in Java, SQL, and cloud platforms like AWS (preferred), Azure, or GCP. Your Role And Responsibilities We are looking for a Software Developer with expertise in Java, SQL, and Cloud Service Providers (CSPs) like AWS. The ideal candidate should be a quick learner with a strong problem-solving mindset and the ability to work in a dynamic, fast-paced environment. Responsibilities Develop, test, and maintain scalable applications using Java. Work with cloud platforms (AWS, GCP, or Azure) for deployment and infrastructure management. Troubleshoot and debug issues in production and development environments. Collaborate with cross-functional teams to understand requirements and deliver solutions. Follow best practices for security, performance, and maintainability. Continuously learn and adapt to new technologies and frameworks. Preferred Education Bachelor's Degree Required Technical And Professional Expertise 2+ years of experience. Strong proficiency in Java. Strong understanding of SQL databases (PostgreSQL, MySQL). Data structures and algorithms. NoSQL databases are a plus. Experience with AWS, GCP, or Azure for cloud deployments. Ability to quickly learn new technologies and frameworks. Familiarity with version control tools like Git. Strong analytical and problem-solving skills. Preferred Technical And Professional Experience Knowledge of DevOps tools like Docker, Kubernetes, Terraform. Experience with RESTful API development. Exposure to CI/CD pipelines.
Posted 3 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a skilled and passionate Node.js Developer with 3–5 years of experience to join our dynamic team. The ideal candidate should have hands-on experience with cloud platforms (AWS, Azure, or GCP) and be proficient in data streaming, audio stream processing, WebSockets, and authentication/authorization mechanisms. You will play a key role in building scalable, real-time applications that deliver high performance and reliability.
Key Responsibilities
Design, develop, and maintain server-side applications using Node.js.
Implement and manage real-time data and audio streaming solutions.
Integrate and manage WebSocket connections for real-time communication.
Develop secure APIs with robust authentication and authorization mechanisms (OAuth2, JWT, etc.).
Deploy and manage applications on cloud platforms (AWS, Azure, or GCP).
Collaborate with front-end developers, DevOps, and QA teams to deliver high-quality products.
Write clean, maintainable, and well-documented code.
Monitor application performance and troubleshoot issues.
Required Skills
Strong proficiency in Node.js and JavaScript/TypeScript.
Experience with cloud services (AWS Lambda, EC2, S3, Azure Functions, GCP Cloud Functions, etc.).
Solid understanding of data streaming and audio stream processing.
Expertise in WebSocket protocols and real-time communication.
Knowledge of authentication/authorization standards and best practices.
Familiarity with RESTful APIs and microservices architecture.
Experience with version control systems like Git.
Good understanding of CI/CD pipelines and containerization (Docker/Kubernetes is a plus).
Preferred Qualifications
Certifications in cloud technologies (AWS Certified Developer, Azure Developer Associate, etc.) are good to have.
Experience with message brokers like Kafka, RabbitMQ, or MQTT.
Familiarity with Agile/Scrum methodologies.
Soft Skills
Strong communication skills, both verbal and written.
Excellent problem-solving and debugging skills.
Self-motivated with the ability to work independently and in a team.
Comfortable working with stakeholders across different time zones.
Working Hours
General Shift: 1:30 PM to 11:30 PM IST
Flexibility to extend hours based on critical deployments or support needs.
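For illustration only (not part of the posting): the JWT-over-WebSocket handshake pattern this role calls for, sketched in Python with the websockets and PyJWT packages; the shared secret, port, and message format are hypothetical, and the same pattern maps directly to Node.js libraries such as ws and jsonwebtoken.

```python
import asyncio

import jwt          # PyJWT, assumed installed
import websockets   # websockets >= 11 assumed (single-argument handler)

SECRET = "change-me"  # hypothetical shared secret; use a real key store in practice

async def handler(websocket):
    # Expect the first frame to carry the client's JWT, then authorize the session.
    token = await websocket.recv()
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        await websocket.close(code=4401, reason="invalid token")
        return
    # Echo subsequent messages back, tagged with the authenticated subject.
    async for message in websocket:
        await websocket.send(f"{claims.get('sub', 'anonymous')}: {message}")

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```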
Posted 3 days ago
4.0 - 6.0 years
6 - 8 Lacs
Indore
On-site
Job Overview:
Seeking an experienced Data Engineer with 4 to 6 years of experience to design, build, and maintain scalable data infrastructure and pipelines. You'll work with cross-functional teams to ensure reliable data flow from various sources to analytics platforms, enabling data-driven decision making across the organization.
Key Responsibilities:
Data Pipeline Development
Design and implement robust ETL/ELT pipelines using tools like Apache Airflow, Spark, or cloud-native solutions
Build real-time and batch processing systems to handle high-volume data streams
Optimize data workflows for performance, reliability, and cost-effectiveness
Infrastructure & Architecture
Develop and maintain data warehouses and data lakes using platforms like Snowflake, Redshift, BigQuery, or Databricks
Implement data modeling best practices, including dimensional modeling and schema design
Architect scalable solutions on cloud platforms (AWS, GCP, Azure)
Data Quality & Governance
Implement data quality checks, monitoring, and alerting systems
Establish data lineage tracking and metadata management
Ensure compliance with data privacy regulations and security standards
Collaboration & Support
Partner with data scientists, analysts, and business stakeholders to understand requirements
Provide technical guidance on data architecture decisions
Mentor junior engineers and contribute to team knowledge sharing
Required Qualifications:
Technical Skills
4-6 years of experience in data engineering or a related field
Proficiency in Python, SQL, and at least one other programming language (Java, Scala, Go)
Strong experience with big data technologies (Spark, Kafka, Hadoop ecosystem)
Hands-on experience with cloud platforms and their data services
Knowledge of containerization (Docker, Kubernetes) and infrastructure as code
Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹800,000.00 per year
Benefits:
Health insurance
Provident Fund
Schedule:
Day shift
Monday to Friday
Work Location: In person
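For illustration only (not part of the posting): a minimal Apache Airflow DAG sketching the ETL pipeline pattern described above; the DAG id, schedule, and stubbed extract/transform/load callables are assumptions (Airflow 2.4+ assumed for the schedule argument).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull raw records from a source system (stubbed for illustration).
    return [{"id": 1, "amount": 125.0}]

def transform(**context):
    rows = context["ti"].xcom_pull(task_ids="extract")
    # Apply a trivial transformation; real pipelines would clean and enrich here.
    return [{**r, "amount_usd": r["amount"]} for r in rows]

def load(**context):
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"loading {len(rows)} rows into the warehouse")

with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```

The same extract-transform-load shape carries over to Spark jobs or cloud-native orchestrators; only the operators change.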
Posted 3 days ago
0 years
0 - 0 Lacs
Indore
On-site
About the Role
We are looking for a highly motivated Research Scientist Intern to join our AI/ML research team. You will contribute to cutting-edge research in machine learning, deep learning, natural language processing, or computer vision, helping to develop novel algorithms and publish results in top conferences.
Responsibilities
Conduct original research in AI/ML, including designing and implementing novel algorithms.
Experiment with and evaluate state-of-the-art models on benchmark datasets.
Analyze and interpret experimental results to drive improvements.
Collaborate closely with research scientists and engineers to translate research findings into product features.
Prepare technical reports and contribute to research papers or patents.
Stay up to date with the latest research trends and breakthroughs in AI/ML.
Qualifications
Required:
Currently pursuing or recently completed a Bachelor's, Master's, or PhD in Computer Science, Electrical Engineering, Mathematics, or a related field.
Strong foundation in machine learning fundamentals and mathematical concepts (linear algebra, probability, statistics).
Experience with ML frameworks such as TensorFlow, PyTorch, or JAX.
Proficiency in Python and relevant scientific computing libraries (NumPy, Pandas, scikit-learn).
Ability to read and implement research papers.
Good problem-solving skills and independent thinking.
Preferred:
Experience with deep learning architectures (CNNs, RNNs, Transformers).
Familiarity with natural language processing, computer vision, or reinforcement learning.
Published or working towards publishing in AI/ML conferences (NeurIPS, ICML, CVPR, ACL).
Familiarity with distributed computing or cloud platforms (AWS, GCP, Azure).
What You Will Gain
Hands-on experience in AI/ML research at a cutting-edge lab/company.
Opportunity to work alongside leading scientists and engineers.
Exposure to publishing research and presenting at conferences.
Mentorship and professional development in the AI field.
Job Type: Internship
Contract length: 3 months
Pay: ₹5,000.00 - ₹7,000.00 per month
Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person
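For illustration only (not part of the posting): a minimal PyTorch training loop of the kind used when evaluating models on a dataset; the synthetic data and two-layer network are placeholders, not a benchmark.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic data standing in for a benchmark dataset.
X = torch.randn(512, 20)
y = (X.sum(dim=1) > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

# A small feed-forward classifier as a placeholder model.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
        total += loss.item() * xb.size(0)
    print(f"epoch {epoch}: mean loss {total / len(X):.4f}")
```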
Posted 3 days ago