14.0 - 18.0 years
0 Lacs
Karnataka
On-site
As a Performance Testing Engineer, you will be responsible for designing comprehensive performance testing strategies and leading initiatives to ensure system reliability, scalability, and responsiveness across applications. You will collaborate with cross-functional teams to conduct thorough performance assessments, including load testing, stress testing, and capacity planning, to identify system bottlenecks and areas for improvement. Your role will involve working closely with development and operations teams to identify key performance indicators (KPIs) and establish benchmarks, monitoring solutions, and dashboards that provide real-time insights into system performance. Additionally, you will architect and implement scalable testing frameworks for performance and data validation, with a focus on AI and Generative AI applications. In this position, you will lead the troubleshooting and resolution of complex performance-related issues in QA, Staging, Pre-production, and/or Production environments. Providing guidance and mentorship to junior QA engineers will be a key aspect of your responsibilities, fostering a culture of quality and continuous learning within the team. Utilizing industry-standard performance testing tools such as JMeter, LoadRunner, and Gatling, you will simulate real-world scenarios and measure system performance. It will be essential to collaborate with development, QA, and operations teams to integrate performance testing into the continuous integration and continuous deployment (CI/CD) processes, offering guidance and support on performance testing best practices. Your expertise will be needed to analyze CPU Utilization, Memory usage, Network usage, and Garbage Collection to verify the performance of applications. Generating performance graphs, session reports, and other related documentation required for validation and analysis will also be part of your responsibilities. 
Furthermore, you will create comprehensive performance test documentation, including test plans, test scripts, and performance analysis reports. You will effectively communicate performance testing results and recommendations to technical and non-technical stakeholders. To excel in this role, you should have a Bachelor's or Master's degree in computer science, engineering, or a related field, along with 14+ years of experience in performance testing and engineering. Proficiency in performance testing tools such as JMeter, LoadRunner, or Gatling, as well as programming languages like Python, JavaScript, and Java, will be necessary. Experience with cloud technologies, containerization, web technologies, and application architecture is essential. Moreover, hands-on experience with performance test simulations, performance analysis, performance tuning, and monitoring in a microservices environment is required. Familiarity with SQL, cloud databases, AI/ML frameworks, data validation techniques, and DevOps practices will be beneficial for this role. Strong communication skills, the ability to analyze complex systems, and a proven track record of leading performance testing teams and driving initiatives through effective collaboration with cross-functional teams are key attributes for success in this position. Additionally, a good understanding of computer networks, networking concepts, and agile development practices will be valuable assets in fulfilling your responsibilities as a Performance Testing Engineer.
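The load testing and latency analysis described above can be sketched in miniature. Below is a minimal, illustrative load generator in Python (not JMeter, LoadRunner, or Gatling themselves): it fires requests from concurrent simulated users against a placeholder `request_fn` and reports nearest-rank latency percentiles, the kind of p50/p95/p99 figures a performance report would contain. All names and parameters here are hypothetical.

```python
import concurrent.futures
import math
import time

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, math.ceil(pct / 100 * len(ordered)) - 1))
    return ordered[k]

def run_load_test(request_fn, users=20, requests_per_user=10):
    """Simulate concurrent users and collect per-request latencies."""
    latencies = []

    def user_session():
        session_latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()  # in a real test this would hit the system under test
            session_latencies.append(time.perf_counter() - start)
        return session_latencies

    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        for result in pool.map(lambda _: user_session(), range(users)):
            latencies.extend(result)

    return {
        "count": len(latencies),
        "p50": percentile(latencies, 50),
        "p95": percentile(latencies, 95),
        "p99": percentile(latencies, 99),
    }

if __name__ == "__main__":
    report = run_load_test(lambda: time.sleep(0.001))
    print(report["count"])  # 200
```

A real harness would add ramp-up schedules, think time, error-rate tracking, and export to the dashboards the role mentions; this only shows the measurement core.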
Posted 1 week ago
8.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities: Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, as well as provide user and operational support on applications to business users Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business process, system process, and industry standards, and make evaluative judgement Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems Ensure essential procedures are followed and help define operating standards and processes Serve as advisor or coach to new or lower-level analysts Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and/or other team members. 
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 8-10 years of relevant experience Experience in systems analysis and programming of software applications Experience in managing and implementing successful projects Working knowledge of consulting/project management techniques/methods Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements Education: Bachelor's degree/University degree or equivalent experience This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. Strong experience and knowledge in Java 8/11/17, J2EE, Multithreading, Microservices (Spring Boot), Web services, and Design Patterns. Extensive experience working on AngularJS, JavaScript, OpenShift, and Docker. Exposure to messaging platforms like Solace and Kafka, and NoSQL DBs like Elasticsearch, will be an added advantage. Basic knowledge of databases (Oracle/SQL) and Python. Knowledge of Gen AI will be an added advantage. 
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About PhonePe Limited: Headquartered in India, its flagship product, the PhonePe digital payments app, was launched in Aug 2016. As of April 2025, PhonePe has over 60 Crore (600 Million) registered users and a digital payments acceptance network spread across over 4 Crore (40+ Million) merchants. PhonePe also processes over 33 Crore (330+ Million) transactions daily with an Annualized Total Payment Value (TPV) of over INR 150 lakh crore. PhonePe's portfolio of businesses includes the distribution of financial products (Insurance, Lending, and Wealth) as well as new consumer tech businesses (Pincode - hyperlocal e-commerce, and Indus AppStore - a localized app store for the Android ecosystem) in India, which are aligned with the company's vision to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services. Culture: At PhonePe, we go the extra mile to make sure you can bring your best self to work, every day! And that starts with creating the right environment for you. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. PhonePe-rs solve complex problems and execute quickly, often building frameworks from scratch. If you're excited by the idea of building platforms that touch millions, ideating with some of the best minds in the country, and executing on your dreams with purpose and speed, join us! About The Role As an SRE (Big Data) Engineer (5 to 7 years of experience) at PhonePe, you will be responsible for ensuring the stability, scalability, and performance of distributed systems operating at scale. You will collaborate with development, infrastructure, and data teams to automate operations, reduce manual effort, handle incidents, and continuously improve system reliability. This role requires strong problem-solving skills, operational ownership, and a proactive approach to mentoring and driving engineering excellence. 
Roles And Responsibilities Ensure the ongoing stability, scalability, and performance of PhonePe's Hadoop ecosystem and associated services. Manage and administer Hadoop infrastructure including HDFS, HBase, Hive, Pig, Airflow, YARN, Ranger, Kafka, Pinot, and Druid. Automate BAU operations through scripting and tool development. Perform capacity planning, system tuning, and performance optimization. Set up, configure, and manage Nginx in high-traffic environments. Administer and troubleshoot Linux and Big Data systems, including networking (IP, iptables, IPsec). Handle on-call responsibilities, investigate incidents, perform root cause analysis, and implement mitigation strategies. Collaborate with infrastructure, network, database, and BI teams to ensure data availability and quality. Apply system updates and patches, and manage version upgrades in coordination with security teams. Build tools and services to improve observability, debuggability, and supportability. Participate in Kerberos and LDAP administration. Perform capacity planning and performance tuning of Hadoop clusters. Work with configuration management and deployment tools like Puppet, Chef, Salt, or Ansible. Skills Required Minimum 1 year of Linux/Unix system administration experience. Over 4 years of hands-on experience in Hadoop administration. Minimum 1 year of experience managing infrastructure on public cloud platforms like AWS, Azure, or GCP (optional). Strong understanding of networking, open-source tools, and IT operations. Proficient in scripting and programming (Perl, Golang, or Python). Hands-on experience maintaining and managing Hadoop ecosystem components like HDFS, YARN, HBase, and Kafka. Strong operational knowledge of systems (CPU, memory, storage, OS-level troubleshooting). Experience in administering and tuning relational and NoSQL databases. Experience in configuring and managing Nginx in production environments. 
Excellent communication and collaboration skills. Good to Have Experience designing and maintaining Airflow DAGs to automate scalable and efficient workflows. Experience in ELK stack administration. Familiarity with monitoring tools like Grafana, Loki, Prometheus, and OpenTSDB. Exposure to security protocols and tools (Kerberos, LDAP). Familiarity with distributed systems like Elasticsearch or similar high-scale environments. PhonePe Full Time Employee Benefits (Not applicable for Intern or Contract Roles) Insurance Benefits - Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance Wellness Program - Employee Assistance Program, Onsite Medical Center, Emergency Support System Parental Support - Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program Mobility Benefits - Relocation benefits, Transfer Support Policy, Travel Policy Retirement Benefits - Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment Other Benefits - Higher Education Assistance, Car Lease, Salary Advance Policy Our inclusive culture promotes individual expression, creativity, innovation, and achievement, and in turn helps us better understand and serve our customers. We see ourselves as a place for intellectual curiosity, ideas and debates, where diverse perspectives lead to deeper understanding and better quality results. PhonePe is an equal opportunity employer and is committed to treating all its employees and job applicants equally, regardless of gender, sexual preference, religion, race, color or disability. If you have a disability or special need that requires assistance or reasonable accommodation during the application and hiring process, including support for the interview or onboarding process, please fill out this form.
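The Airflow DAG experience mentioned above comes down to expressing work as a directed acyclic graph of tasks and executing it in dependency order. This toy scheduler is plain Python, not Airflow's API, and the task names are invented; it only illustrates the core idea of topological execution order:

```python
from collections import deque

def topological_order(tasks):
    """tasks: dict mapping task name -> list of upstream dependencies.
    Returns a valid execution order, as a workflow engine would compute."""
    indegree = {t: len(deps) for t, deps in tasks.items()}
    downstream = {t: [] for t in tasks}
    for t, deps in tasks.items():
        for d in deps:
            downstream[d].append(t)
    # Start with tasks that have no upstream dependencies.
    ready = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in sorted(downstream[t]):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in task graph")
    return order

# A hypothetical BAU pipeline: extract -> validate -> load -> report.
dag = {
    "extract": [],
    "validate": ["extract"],
    "load": ["validate"],
    "report": ["load"],
}
print(topological_order(dag))  # ['extract', 'validate', 'load', 'report']
```

Airflow adds scheduling, retries, backfills, and operators on top of this ordering; the graph traversal itself is the part shown here.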
Posted 1 week ago
5.0 years
5 - 10 Lacs
Hyderabad
On-site
DESCRIPTION The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation, and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products, and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists, and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world. We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA, and LATAM). 
Here is a glimpse of the problems that this team deals with on a regular basis: using live package and truck signals to adjust truck capacities in real time; HOTW models for Last Mile Channel Allocation; using LLMs to automate analytical processes and insight generation; operations research to optimize middle-mile truck routes; working with global partner science teams to effect Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings; Deep Learning models to synthesize attributes of addresses; and abuse detection models to reduce network losses. Key job responsibilities 1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes 2. Design, develop, evaluate, and deploy innovative and highly scalable ML/OR models 3. Work closely with other science and engineering teams to drive real-time model implementations 4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions 5. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation, and model maintenance 6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production 7. Lead projects and mentor other scientists and engineers in the use of ML techniques BASIC QUALIFICATIONS 5+ years of data scientist experience Experience with data scripting languages (e.g., SQL, Python, R) or statistical/mathematical software (e.g., R, SAS, or Matlab) Experience with statistical models, e.g., multinomial logistic regression Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive) Experience working collaboratively with data engineers and business intelligence engineers Demonstrated expertise in a wide range of ML techniques PREFERRED QUALIFICATIONS Experience as a leader and mentor on a data science team Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science Expertise in Reinforcement Learning and Gen AI is preferred Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
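The multinomial logistic regression named in the basic qualifications scores each class with a linear function of the features and normalizes the scores via softmax. A dependency-free sketch of the scoring step follows; the weights are illustrative values, not a trained model:

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(weights, biases, features):
    """Multinomial logistic regression scoring: w_k . x + b_k per class k,
    then softmax, then the most probable class."""
    scores = [sum(w * x for w, x in zip(ws, features)) + b
              for ws, b in zip(weights, biases)]
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs

# Three classes over two features, with made-up weights and biases.
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [0.0, 0.0, 0.0]
label, probs = predict(W, b, [2.0, 0.0])
print(label)  # 0
```

Fitting W and b (by gradient descent on cross-entropy loss) is what a library such as scikit-learn automates; this shows only the inference side of the model.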
Posted 1 week ago
5.0 years
5 - 10 Lacs
Gurgaon
On-site
DESCRIPTION The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation, and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products, and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists, and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world. We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA, and LATAM). 
Here is a glimpse of the problems that this team deals with on a regular basis: using live package and truck signals to adjust truck capacities in real time; HOTW models for Last Mile Channel Allocation; using LLMs to automate analytical processes and insight generation; operations research to optimize middle-mile truck routes; working with global partner science teams to effect Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings; Deep Learning models to synthesize attributes of addresses; and abuse detection models to reduce network losses. Key job responsibilities 1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes 2. Design, develop, evaluate, and deploy innovative and highly scalable ML/OR models 3. Work closely with other science and engineering teams to drive real-time model implementations 4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions 5. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation, and model maintenance 6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production 7. Lead projects and mentor other scientists and engineers in the use of ML techniques BASIC QUALIFICATIONS 5+ years of data scientist experience Experience with data scripting languages (e.g., SQL, Python, R) or statistical/mathematical software (e.g., R, SAS, or Matlab) Experience with statistical models, e.g., multinomial logistic regression Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive) Experience working collaboratively with data engineers and business intelligence engineers Demonstrated expertise in a wide range of ML techniques PREFERRED QUALIFICATIONS Experience as a leader and mentor on a data science team Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science Expertise in Reinforcement Learning and Gen AI is preferred Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 1 week ago
2.0 - 6.0 years
3 - 5 Lacs
India
On-site
Python Developer – Data Scraping, MongoDB, Solr/Elasticsearch (Immediate Joiner Preferred) We are seeking a skilled Python Developer with strong experience in web/data scraping and working knowledge of MongoDB, Solr, and/or Elasticsearch. You will be responsible for developing, maintaining, and optimizing scalable scraping scripts to collect structured and unstructured data, efficiently manage it in MongoDB, and index it for search and retrieval using Solr or Elasticsearch. Key Responsibilities: Design and develop robust web scraping solutions using Python (e.g., Scrapy, BeautifulSoup, Selenium). Extract and process large volumes of data from websites, APIs, and other digital sources. Ensure scraping mechanisms are efficient, resilient to site changes, and compliant with best practices. Store, retrieve, and manage scraped data efficiently in MongoDB databases. Index, manage, and optimize data search capabilities using Solr or Elasticsearch. Build data validation, cleaning, and transformation pipelines. Handle challenges like CAPTCHA solving, IP blocking, and dynamic content rendering. Monitor scraping jobs and troubleshoot errors and bottlenecks. Optimize scraping speed, search indexing, storage efficiency, and system scalability. Collaborate with product managers to define data requirements. Required Skills and Qualifications: 2 to 6 years of experience with Python, specifically in web scraping projects. Proficient in scraping libraries such as Scrapy, BeautifulSoup, Requests, Selenium, or similar. Hands-on experience with MongoDB (querying, indexing, schema design for unstructured/structured data). Strong experience with Solr or Elasticsearch for data indexing, search optimization, and querying. Good understanding of HTML, CSS, XPath, and JSON. Experience handling anti-scraping mechanisms like IP rotation, proxy usage, and headless browsers. Familiarity with RESTful APIs and parsing data formats like JSON, XML, and CSV. 
Strong problem-solving skills and attention to detail. Good written and verbal communication skills. Job Type: Full-time Pay: ₹300,000.00 - ₹500,000.00 per year Experience: data scraping: 1 year (Preferred) MongoDB: 1 year (Preferred) Work Location: In person
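The scrape-store-index pipeline this role describes starts with pulling structured fields out of HTML. Below is a stdlib-only sketch of that extraction step; a production scraper would use Scrapy or BeautifulSoup and would follow with a MongoDB insert and a Solr/Elasticsearch index call. The sample markup and the class name are invented for illustration:

```python
from html.parser import HTMLParser

class TitleLinkExtractor(HTMLParser):
    """Collects (text, href) pairs from anchor tags - the core of one scrape step."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# A tiny fake listings page standing in for a fetched response body.
page = ('<ul><li><a href="/jobs/1">Python Developer</a></li>'
        '<li><a href="/jobs/2">Data Engineer</a></li></ul>')
parser = TitleLinkExtractor()
parser.feed(page)
print(parser.links)
# [('Python Developer', '/jobs/1'), ('Data Engineer', '/jobs/2')]
```

Each extracted record would then become a MongoDB document and an Elasticsearch/Solr index entry; robustness concerns such as retries, proxy rotation, and dynamic rendering sit around this core, not inside it.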
Posted 1 week ago
7.0 years
5 - 6 Lacs
Bengaluru
On-site
About Exotel Exotel is one of Asia’s largest and most trusted customer engagement platforms. From voice to SMS, WhatsApp to AI-led contact centre intelligence, we help businesses deliver seamless, secure, and scalable conversations with their customers. As we grow, our focus remains on customer centricity, operational excellence, and smart automation to power the next generation of experiences. Platform Engineering @ Exotel The Platform Engineering group is responsible for the distributed systems and data infrastructure that power Exotel’s products. The team’s work directly impacts reliability, scalability, security, and performance across the entire Exotel stack. We abstract complex distributed systems and data management challenges to enable faster innovation, stronger reliability, and better business productivity. Role Overview As a Principal Engineer – Data & Infrastructure, you will be a technical leader driving the architecture, design, and implementation of high-scale, high-reliability distributed data systems. You will partner closely with product and engineering leaders to define the technical roadmap, guide engineering best practices, and mentor senior engineers. This is a hands-on leadership role, requiring deep technical expertise and the ability to influence and align teams toward long-term strategic goals. Key Responsibilities Own the architecture and delivery of data infrastructure projects: data pipelines, data analytics platforms, reporting frameworks, distributed databases, and messaging systems. Evaluate, adopt, and integrate emerging big data and distributed computing technologies to improve scalability, reliability, and performance. Collaborate with cross-functional teams on data modelling, architecture, and governance strategies. Provide technical leadership for design reviews, architecture discussions, and system optimisations. Mentor senior engineers and contribute to building a strong engineering culture. 
Drive operational excellence by implementing monitoring, alerting, and SLA adherence. Lead initiatives to optimise infrastructure costs, improve automation, and enhance deployment workflows. Be hands-on in solving complex engineering challenges related to distributed systems and low-latency data access. What We're Looking For Must-haves 7–11 years of experience in software engineering, with at least 3+ years in a technical leadership role. Proven experience building and scaling data platforms (data pipelines, data APIs, reporting frameworks, and connectors). Strong experience with distributed databases (MySQL, Aerospike, Elasticsearch, Redis, etc.) and messaging systems. Expertise in Java, Go, or equivalent systems programming languages. Experience with Kubernetes and EKS. Familiarity with a few of the technologies such as Prometheus, Grafana, ELK, Jenkins, VPN, Kafka, etc. Experience leading engineering teams through architecture definition, execution, and delivery. A DevOps mindset - own what you build, from design to production operations. Strong Computer Science fundamentals: algorithms, data structures, distributed systems design. Good-to-haves Hands-on experience with cloud platforms (AWS, GCP, Azure) and IaC tools (Ansible, Chef, Puppet, Terraform). Familiarity with AI/ML data pipelines. Experience operating production-scale distributed systems. Exposure to serverless, orchestration engines, or advanced big data frameworks.
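The monitoring and SLA-adherence responsibility above is often framed as an error budget: an availability SLO implies a fixed allowance of failed requests per period, and alerting fires on budget burn rather than on individual failures. A small illustrative calculator (the SLO value and request counts are made up):

```python
def error_budget(slo_target, total_requests, failed_requests):
    """Remaining error budget under an availability SLO (e.g. 99.9%).
    budget_used_pct > 100 means the SLO for the period has been breached."""
    allowed_failures = total_requests * (1 - slo_target)
    used = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "availability": round(1 - failed_requests / total_requests, 6),
        "budget_used_pct": round(used * 100, 1),
    }

# A 99.9% SLO over 1M requests allows ~1000 failures; 300 failures burn 30%.
print(error_budget(0.999, 1_000_000, 300))
# {'availability': 0.9997, 'budget_used_pct': 30.0}
```

Real SLA dashboards (Prometheus/Grafana, as listed in the must-haves) compute the same ratio over sliding windows; the arithmetic itself is this simple.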
Posted 1 week ago
5.0 years
5 - 10 Lacs
Bengaluru
On-site
DESCRIPTION The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation, and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products, and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists, and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world. We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA, and LATAM). 
Here is a glimpse of the problems that this team deals with on a regular basis:
- Using live package and truck signals to adjust truck capacities in real time
- HOTW models for Last Mile Channel Allocation
- Using LLMs to automate analytical processes and insight generation
- Ops research to optimize middle-mile truck routes
- Working with global partner science teams on Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings
- Deep Learning models to synthesize attributes of addresses
- Abuse detection models to reduce network losses

Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes
2. Design, develop, evaluate, and deploy innovative and highly scalable ML/OR models
3. Work closely with other science and engineering teams to drive real-time model implementations
4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
5. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model maintenance
6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale, complex ML models in production
7. Lead projects and mentor other scientists and engineers in the use of ML techniques

BASIC QUALIFICATIONS
- 5+ years of data scientist experience
- Experience with data scripting languages (e.g. SQL, Python, R) or statistical/mathematical software (e.g. R, SAS, or Matlab)
- Experience with statistical models, e.g. multinomial logistic regression
- Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
- Experience working collaboratively with data engineers and business intelligence engineers
- Demonstrated expertise in a wide range of ML techniques

PREFERRED QUALIFICATIONS
- Experience as a leader and mentor on a data science team
- Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
- Expertise in Reinforcement Learning and Gen AI

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
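The "multinomial logistic regression" named in the basic qualifications reduces, at prediction time, to a softmax over per-class scores. A minimal standard-library sketch (the scores and the delivery-channel framing are invented for illustration; a library such as scikit-learn would fit the per-class weights in practice):

```python
import math

# Softmax: turn per-class scores (logits) into class probabilities.
def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three hypothetical delivery-channel classes
probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # [0.659, 0.242, 0.099]
```

The predicted class is simply the index of the largest probability, and the probabilities always sum to 1.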
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Associate

Job Description & Summary
A career within Application and Emerging Technology services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
" Responsibilities Build, Train, and Deploy ML Models using Python on Azure/AWS 3+ years of Experience in building Machine Learning and Deep Learning models in Python Experience on working on AzureML/AWS Sagemaker Ability to deploy ML models with REST based APIs Proficient in distributed computing environments / big data platforms (Hadoop, Elasticsearch, etc.) as well as common database systems and value stores (SQL, Hive, HBase, etc.) Ability to work directly with customers with good communication skills. Ability to analyze datasets using SQL, Pandas Experience of working on Azure Data Factory, PowerBI Experience on PySpark, Airflow etc. Experience of working on Docker/Kubernetes Mandatory Skill Sets Data Science, Machine Learning Preferred Skill Sets Data Science, Machine Learning Years Of Experience Required 3 - 6 Education Qualification B.Tech / M.Tech / MBA / MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering, Master of Engineering Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Data Science, Machine Learning Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Reasoning, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Documentation Development, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Optimism, Performance Assessment, Performance Management Software, Problem Solving, Product Management, Product Operations, Project Delivery {+ 11 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're looking for a Senior Software Engineer. This role is hybrid, Pune office. As a Senior Software Engineer, you will be designing and delivering solutions that scale to meet the needs of some of the largest and most innovative organizations in the world. You will work with team members to understand and exceed the expectations of users, constantly pushing the technical envelope, and helping Cornerstone deliver great results. Working in an agile software development framework as a Scrum Lead, focused on development sprints and regular release cycles, you’ll own the complete technical delivery of the application and mentor juniors.

In this role, you will…
- Work in a global team, delivering SaaS software on private or public cloud.
- Develop, maintain, and enhance .NET applications and services to contribute to our cloud platform, across the entire technology stack, including ASP.NET, C#, .NET, CI/CD tooling, and infrastructure.
- Collaborate with the product team and the wider organization to plan, design, and scope new initiatives and features, and identify opportunities for enhancements.
- Own your code – and mentor team members in owning their code – all the way to production, including continuous deployment, monitoring, and troubleshooting.
- Be responsible for the overall quality of the product, including test coverage across unit, integration, and automated functional tests.
- Provide guidance to other engineers and quality assurance staff to ensure our requirements for quality, security, scalability, and usability are met.
- Work independently with minimal supervision and provide leadership and mentorship to other software engineers.
- Give due consideration to privacy and security obligations.
- Participate in key architectural decisions and design considerations.
Troubleshoot complex production issues and provide detailed RCA.

You’ve Got What It Takes If You Have…
- Bachelor’s degree in computer science or equivalent experience
- 4-6+ years of web-based application development experience using ASP.NET, C#, .NET
- Proficient experience with relational databases such as Microsoft SQL Server/MySQL
- Strong analytical & problem-solving skills, a keen sense of ownership, and a detail-oriented mindset
- Effective communication & persuasion skills
- Ability to effectively manage and correctly prioritize multiple streams of work
- Ability to clearly communicate technical issues and project details
- Experience delivering software in a Lean or Agile environment
- Good team player with the ability to handle multiple concurrent priorities in a fast-paced environment
- Passion for continuous process and technology improvement
- Excellent analytical, quantitative, and problem-solving abilities
- Conversant in algorithms, software design patterns, and their best usage
- Self-motivated, requiring minimal oversight

Extra dose of awesome if you have…
- Up-to-date experience with Elasticsearch and OpenSearch
- Hands-on experience with AWS and Docker
- Experience in JavaScript frameworks like Angular or React
- Experience with continuous deployment
- Experience in a startup environment or on a global software team
- Experience developing microservices, RESTful services, or other SOA development experience

Our Culture
Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now – is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow and advance. To be better tomorrow than they are today.

Who We Are
Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are.
With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn , Comparably , Glassdoor , and Facebook !
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Requisition ID # 25WD89345

Position Overview
We are looking for a passionate Sr. Software Reliability Engineer to join our platform team in Pune, India. Our organisational ecosystem comprises cloud services. Autodesk Platform Services (APS) is a cloud service platform that powers custom and pre-built applications, integrations, and innovative solutions. It offers APIs and web services to unlock the value of our customers' Design and Make data, and connects custom and end-to-end workflows. It is an opportunity to work on the APIs and services that directly impact the millions of users of Autodesk products.

Reporting to the Sr. Manager of Engineering, you will contribute towards ensuring smooth functioning of Autodesk Platform APIs, which are the building blocks for next-generation design apps. In this hybrid role you will be part of an Agile product team building world-class cloud software applications and services. You will work in a global organisation and collaborate with local and remote colleagues from various disciplines like business, engineering, operations, and support. You will work with highly motivated and talented software engineers. As part of the team, you will learn, teach, grow, and help find innovative solutions to sophisticated and modern engineering problems. You will make critical choices, tackle hard problems, and improve the platform’s reliability, resiliency, and scalability. We are looking for someone who is enthusiastic about working in a team, can own and deliver long-term projects to completion, is detail and quality oriented, and excited about the prospect of having a big impact within Autodesk.
Responsibilities
- Configure and improve cloud infrastructure for service availability, resiliency, performance, and cost efficiency as load grows over time
- Keep systems updated on schedule for security compliance
- Be accountable for SLOs of the services by driving and improving the process, including service reviews, fire drills, and HA assessments
- Engage in technical discussions and technical decision-making
- Build tools to improve operational efficiency
- Troubleshoot technical issues and find appropriate solutions
- Perform on-call rotation for timely service recovery to guarantee the health of the production system
- Work together with other engineers in the scrum team in an Agile practice

Minimum Qualifications
- Bachelor's degree in Computer Science or a related technical field
- 6+ years of software engineering experience, including at least 3 years' working experience as a Site Reliability Engineer accountable for SLOs
- Experience with Elasticsearch / OpenSearch is highly preferred
- Understanding of, and curiosity about, SRE best practices, architectures, and methods
- Experience with deployment and development on AWS
- Experience with Continuous Delivery methodologies and tools
- Good knowledge of resiliency patterns and cloud security
- Experience troubleshooting issues with users, and a teamwork spirit
- Experience deploying with Terraform
- Proficiency in using observability tools such as Grafana, OpenTelemetry, or Prometheus
- Experience with security compliance, such as SOC 2

The Ideal Candidate
A team player with a results-focused passion to deliver an overall solution. You embrace perpetual learning and are always ready for a new challenge. You are comfortable not only presenting demos of working software, but also addressing questions about progress.

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies.
We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk – it’s at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world. When you’re an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!

Salary Transparency
Salary is one part of Autodesk’s competitive compensation package. Offers are based on the candidate’s experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
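The SLO accountability this role describes is commonly tracked through an error budget: a 99.9% availability target over a window "budgets" 0.1% of requests as allowed failures. A toy calculation, assuming request-count-based SLOs (all numbers invented):

```python
# Error-budget arithmetic for a request-count SLO.
def error_budget_remaining(slo, total_requests, failed_requests):
    """Fraction of the window's error budget still unspent (can go negative)."""
    allowed_failures = (1 - slo) * total_requests
    return 1 - failed_requests / allowed_failures

# A 99.9% availability SLO over 1,000,000 requests allows ~1,000 failures;
# 250 failures so far leaves ~75% of the budget unspent.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 3))  # 0.75
```

Teams typically gate risky changes (deployments, fire drills) on how much of this budget remains.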
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
For over four decades, PAR Technology Corporation (NYSE: PAR) has been a leader in restaurant technology, empowering brands worldwide to create lasting connections with their guests. Our innovative solutions and commitment to excellence provide comprehensive software and hardware that enable seamless experiences and drive growth for over 100,000 restaurants in more than 110 countries. Embracing our "Better Together" ethos, we offer Unified Customer Experience solutions, combining point-of-sale, digital ordering, loyalty and back-office software solutions as well as industry-leading hardware and drive-thru offerings. To learn more, visit partech.com or connect with us on LinkedIn, X (formerly Twitter), Facebook, and Instagram.

Position Description
We are seeking a Machine Learning Engineer to join our growing AI team at PAR. This role will focus on developing and scaling GenAI-powered services, recommender systems, and ML infrastructure that fuel personalized customer engagement. You will work across teams to drive technical excellence and real-world ML application impact.
Position Location: Jaipur / Gurgaon
Reports To: [Hiring Manager Title – e.g., Head of AI or Senior Director, AI Engineering]

Entrees (Requirements) – What We’re Looking For:
- Master’s or PhD in Computer Science, Machine Learning, or a related field
- 3+ years of experience delivering production-ready machine learning solutions
- Deep understanding of ML algorithms, recommender systems, and NLP
- Experience with LLM frameworks (Hugging Face Transformers, LangChain, OpenAI API, Cohere)
- Strong proficiency in Python, including object-oriented design and scalable architecture
- Advanced expertise in Databricks: notebooks, MLflow tracking, data pipelines, job orchestration
- Hands-on experience with cloud-native technologies – preferably AWS (S3, Lambda, ECS/EKS, SageMaker)
- Experience working with modern data platforms: Delta Lake, Redis, Elasticsearch, NoSQL, BigQuery
- Strong verbal and written communication skills to translate technical work into business impact
- Flexibility to collaborate with global teams in PST/EST time zones when required

With a Side Of (Additional Skills)
- Familiarity with vector databases (FAISS, ChromaDB, Pinecone, Weaviate)
- Experience with retrieval-augmented generation (RAG) and hybrid search systems
- Skilled in deploying ML APIs using FastAPI or Flask
- Background in text-to-SQL applications or domain-specific LLMs
- Knowledge of MLOps practices: model versioning, automated retraining, monitoring
- Familiarity with CI/CD for ML pipelines via Databricks Repos, GitHub Actions, etc.
- Contributions to open-source ML or GenAI projects
- Experience in the restaurant/hospitality tech or digital marketing domain

Unleash Your Potential: What You Will Be Doing and Owning:
- Build and deploy GenAI-powered microservices and personalized recommendation engines
- Design and manage Databricks data pipelines for training, feature engineering, and inference
- Develop high-performance ML APIs and integrate with frontend applications
- Implement retrieval pipelines with vector DBs and search engines
- Define and maintain MLOps workflows for versioning, retraining, and monitoring
- Drive strategic architectural decisions for LLM-powered, multi-model systems
- Collaborate across product and engineering teams to embed intelligence in customer experiences
- Enable CI/CD for ML systems with modern orchestration tools
- Advocate for scalability, performance, and clean code in all deployed solutions

Interview Process
- Interview #1: Phone Screen with Talent Acquisition Team
- Interview #2: Technical Interview – Round 1 with AI/ML Team (via MS Teams / F2F)
- Interview #3: Technical Interview – Round 2 with AI/ML Team (via MS Teams / F2F)
- Interview #4: Final Round with Hiring Manager and Cross-functional Stakeholders (via MS Teams / F2F)

PAR is proud to provide equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. We also provide reasonable accommodations to individuals with disabilities in accordance with applicable laws. If you require reasonable accommodation to complete a job application, pre-employment testing, a job interview or to otherwise participate in the hiring process, or for your role at PAR, please contact accommodations@partech.com. If you’d like more information about your EEO rights as an applicant, please visit the US Department of Labor's website.
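The "retrieval pipelines with vector DBs" this role describes boil down to nearest-neighbour search over embeddings. A dependency-free sketch of the retrieval step behind RAG (the three-dimensional vectors and restaurant-flavoured documents are made up; real systems use FAISS/Pinecone-style indexes over high-dimensional embeddings):

```python
import math

# Cosine similarity between two equal-length vectors
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Return the k documents whose (toy) embeddings are closest to the query
def retrieve(query_vec, docs, k=2):
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

docs = [
    {"text": "menu item: margherita pizza", "vec": [0.9, 0.1, 0.0]},
    {"text": "loyalty points policy",       "vec": [0.1, 0.9, 0.2]},
    {"text": "pizza topping guide",         "vec": [0.8, 0.2, 0.1]},
]
print(retrieve([1.0, 0.0, 0.0], docs))  # the two pizza-related documents
```

In a full RAG pipeline the retrieved texts would then be stuffed into the LLM prompt as grounding context.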
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Profile
Shipskart is a leading e-marketplace for the maritime (shipping) industry, delivering an array of products – from FMCG and consumable products to spare parts – to ships around the globe. It is an innovative marine eCommerce company with an established base of live paying customers. We're revolutionizing how maritime businesses approach vessel supply chain management through cutting-edge technology and a user-centric mindset.

Profile Description: Full Stack Developer – Global Multi-Tenant Enterprise SaaS Platform

About the Role
We are building the next-generation, global multi-tenant SaaS platform for Shipskart, serving maritime and enterprise customers worldwide. This platform will be powered by a cloud-native, microservices architecture with a heterogeneous tech stack combining .NET Core/Node.js/Python and modern frontend technologies like React/Angular. We are looking for a hands-on Full Stack Developer who thrives in a complex, distributed environment and can work across backend, frontend, and cloud infrastructure layers.

Key Responsibilities:
- Design, develop, and maintain scalable microservices in .NET Core, Node.js, and Python.
- Implement data persistence with PostgreSQL (via EF Core/Dapper) and MongoDB, including schema design, query optimization, and indexing strategies.
- Build secure, responsive, and high-performance frontend applications in React or Angular.
- Develop integrations with AWS and/or Azure cloud services (S3, KMS, Secrets Manager, SQS, SES, Event Hub, etc.).
- Implement multi-tenant architecture patterns with proper data isolation and access control.
- Write efficient, reusable, and secure code following best practices and OWASP guidelines.
- Implement application logging, distributed tracing, and performance monitoring.
- Work with containerized environments using Docker and manage deployments via CI/CD pipelines (GitHub Actions / Azure DevOps / Jenkins).
- Collaborate with cross-functional teams (DevOps, QA, Product, UI/UX) to deliver end-to-end features.
- Participate in code reviews, architecture discussions, and performance tuning.
- Contribute to data analytics & AI pipelines using Python where needed.

Required Skills & Experience:
- 5+ years of full-stack development experience in enterprise SaaS platforms.
- Strong experience in .NET Core for backend services with Entity Framework Core and/or Dapper.
- Solid hands-on experience with MongoDB for document data storage.
- Experience in Node.js for cross-cutting services (e.g., API gateway, middleware).
- Proficiency in Python for analytics, ML/AI integrations, or automation scripts.
- Strong SQL and NoSQL database design and optimization skills.
- Experience with React.js or Angular for frontend development.
- Cloud-native development experience with AWS and/or Azure SDKs.
- Knowledge of KMS encryption/decryption, secret key management, and secure API design.
- Familiarity with messaging queues (AWS SQS, Azure Service Bus, RabbitMQ, Kafka).
- Hands-on experience with Docker and container orchestration concepts (Kubernetes exposure is a plus).
- Experience with CI/CD pipelines and DevOps practices.
- Strong understanding of RESTful APIs, GraphQL (preferred), and microservices patterns.
- Application logging & monitoring experience with tools like ELK, Prometheus, Grafana, or Azure Monitor.
- Strong debugging, profiling, and performance optimization skills.

Preferred Skills / Experience:
- Experience with multi-tenant SaaS architectures.
- Familiarity with CQRS and Event Sourcing patterns.
- Experience with Typesense / ElasticSearch for search services.
- Exposure to gRPC and WebSockets for real-time communications.
- Experience in the maritime or e-commerce domain is a plus.

Required Soft Skills:
- Strong problem-solving skills with a product-oriented mindset.
- Excellent communication and collaboration abilities.
- Ability to thrive in fast-paced, agile environments.
- Self-driven, ownership-focused, and detail-oriented.

What We Offer:
- Opportunity to be part of building a global SaaS platform from the ground up.
- Exposure to cutting-edge cloud-native technologies.
- Collaborative, innovative, and inclusive work culture.
- Competitive salary with performance-based incentives.

Location: Noida, Uttar Pradesh
Salary: Up to 9.0 LPA
Joining: Immediate

Interested candidates meeting all the 'Required Skills & Experience' mentioned above may share their updated resume at vinita@shipskart.com / hr@shipskart.com
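The multi-tenant data-isolation pattern this role calls for is often enforced by scoping every query to a tenant key. A deliberately simplified, in-memory sketch (the repository class, row shape, and tenant names are all hypothetical; in PostgreSQL this is typically a tenant_id column combined with row-level security):

```python
# Row-level tenant isolation, reduced to its essence: every read goes
# through a repository that forces a tenant_id filter, so one tenant's
# queries can never return another tenant's rows.
class TenantScopedRepo:
    def __init__(self, rows):
        self._rows = rows  # stand-in for a database table

    def find(self, tenant_id, **filters):
        return [
            row for row in self._rows
            if row["tenant_id"] == tenant_id
            and all(row.get(k) == v for k, v in filters.items())
        ]

rows = [
    {"tenant_id": "acme-shipping", "sku": "ROPE-12", "qty": 40},
    {"tenant_id": "nordic-lines",  "sku": "ROPE-12", "qty": 7},
]
repo = TenantScopedRepo(rows)
print(repo.find("acme-shipping", sku="ROPE-12"))  # only acme's row
```

The key design choice is that the tenant filter is applied inside the repository, not left to each caller to remember.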
Posted 1 week ago
5.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Golang Backend Developer – Expert Level
Location: Nagpur, India – Full-Time, In-Office
Type: Backend Software Engineering
Experience: Senior / Expert Level

Trademarkia is seeking an experienced Golang Backend Developer to join our high-performance engineering team in Nagpur, India. You will be responsible for architecting, building, and optimizing scalable backend systems that power our global intellectual property platforms, serving thousands of clients worldwide. Our backend services handle trademark, patent, and litigation workflows, advanced search capabilities, AI-assisted legal drafting, and real-time data integrations with government and third-party APIs. As an expert, you’ll be expected to lead backend projects from design to deployment, mentor junior engineers, and ensure best practices across code quality, security, and performance.

Key Responsibilities
- Backend Architecture & Development: Design, develop, and maintain scalable, high-performance APIs and backend services using Golang.
- Database Management: Work with PostgreSQL and other relational databases for optimal data modeling, indexing, and query performance.
- API Integration: Build and maintain secure integrations with third-party APIs, including government IP registries and payment systems.
- Performance Optimization: Profile and tune code for low-latency, high-throughput applications.
- Security & Compliance: Implement security best practices for sensitive client data in compliance with international privacy regulations.
- Testing & Quality Assurance: Write unit, integration, and load tests to ensure system reliability.
- Mentorship & Code Reviews: Guide junior developers, review code, and enforce best practices.
- DevOps Collaboration: Work closely with DevOps engineers on CI/CD pipelines, containerization (Docker), and deployment on AWS.

Required Skills & Experience
- 5+ years of backend development experience, with 3+ years in Golang.
- Strong knowledge of Golang concurrency patterns, memory management, and performance tuning.
- Expertise in PostgreSQL (schema design, query optimization, migrations).
- Experience with RESTful API design and GraphQL (preferred).
- Familiarity with message queues (Kafka, RabbitMQ, or similar) for asynchronous processing.
- Proficiency in Docker and containerized application deployment.
- Experience with AWS services such as Lambda, EC2, S3, RDS, and API Gateway.
- Strong debugging and troubleshooting skills for production environments.
- Solid understanding of security practices (OWASP, API authentication/authorization, encryption).

Preferred Qualifications
- Experience with microservices architecture and distributed systems.
- Background in search indexing technologies (Elasticsearch, Meilisearch, or similar).
- Exposure to AI/ML model integration into backend services.
- Familiarity with Terraform or Infrastructure as Code (IaC).

What We Offer
- Competitive salary based on expertise and leadership capabilities.
- Opportunity to work on cutting-edge legal technology products used globally.
- Modern tech stack and high autonomy in solution design.
- Collaborative and innovative engineering culture.
- Professional growth through challenging projects and mentorship opportunities.
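The asynchronous processing mentioned alongside the message queues follows a producer/worker-pool shape, sketched here in Python with an in-memory queue for brevity (a Go version would use goroutines and channels, and Kafka/RabbitMQ would replace the queue in production; the squaring "work" is a placeholder):

```python
import queue
import threading

# Worker-pool sketch of queue-driven asynchronous processing.
def worker(tasks, results):
    while True:
        item = tasks.get()
        if item is None:          # poison pill: shut this worker down
            break
        results.put(item * item)  # stand-in for real work
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(4)]
for w in workers:
    w.start()
for n in range(5):
    tasks.put(n)                  # producer side
tasks.join()                      # wait until every task is processed
for _ in workers:
    tasks.put(None)               # one pill per worker
for w in workers:
    w.join()
print(sorted(results.queue))      # [0, 1, 4, 9, 16]
```

The same decoupling applies at system scale: producers and consumers only agree on the message shape, never on each other's timing.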
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionises customer engagement by transforming contact centres into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organisations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry.

Position Overview
We seek an experienced Staff Software Engineer to lead the design and development of our data warehouse and analytics platform, in addition to helping raise the engineering bar for the entire technology stack at Level AI, including applications, platform, and infrastructure. They will actively collaborate with team members and the wider Level AI engineering community to develop highly scalable and performant systems. They will be a technical thought leader who will help drive solving complex problems of today and the future by designing and building simple and elegant technical solutions. They will coach and mentor junior engineers and drive engineering best practices. They will actively collaborate with product managers and other stakeholders both inside and outside the team.
What you’ll get to do at Level AI (and more as we grow together):
- Design, develop, and evolve data pipelines that ingest and process high-volume data from multiple external and internal sources
- Build scalable, fault-tolerant architectures for both batch and real-time data workflows using tools like GCP Pub/Sub, Kafka and Celery
- Define and maintain robust data models with a focus on domain-oriented design, supporting both operational and analytical workloads
- Architect and implement data lake/warehouse solutions using Postgres and Snowflake
- Lead the design and deployment of workflow orchestration using Apache Airflow for end-to-end pipeline automation
- Ensure platform reliability with strong monitoring, alerting, and observability for all data services and pipelines
- Collaborate closely with other internal product & engineering teams to align data platform capabilities with product and business needs
- Own and enforce data quality, schema evolution, data contract practices, and governance standards
- Provide technical leadership, mentor junior engineers, and contribute to cross-functional architectural decisions

We'd love to explore more about you if you have
- 8+ years of experience building large-scale data systems, preferably in high-ingestion, multi-source environments
- Strong system design, debugging, and performance tuning skills
- Strong programming skills in Python and Java
- Deep understanding of SQL (Postgres, MySQL) and data modeling (star/snowflake schema, normalization/denormalization)
- Hands-on experience with streaming platforms like Kafka and GCP Pub/Sub
- Expertise with Airflow or similar orchestration frameworks
- Solid experience with Snowflake, Postgres, and distributed storage design
- Familiarity with Celery for asynchronous task processing
- Comfortable working with ElasticSearch for data indexing and querying
- Exposure to Redash, Metabase, or similar BI/analytics tools
- Proven experience deploying solutions on cloud platforms like GCP or AWS

Preferred Attributes:
- Experience with data governance and lineage tools.
- Demonstrated ability to handle scale, reliability, and incident response in data systems.
- Excellent communication and stakeholder management skills.
- Passion for mentoring and growing engineering talent.

To learn more visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
Our AI platform: https://www.youtube.com/watch?v=g06q2V_kb-s

Compensation: We offer market-leading compensation, based on the skills and aptitude of the candidate.
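The "data contract practices" in the responsibilities usually mean validating each ingested record against an agreed schema before it enters a pipeline. A minimal sketch (the contact-centre field names and types are invented for illustration):

```python
# Minimal data-contract check: every ingested record must carry the agreed
# fields with the agreed types, or it is rejected with a list of violations.
CONTRACT = {"call_id": str, "agent_id": str, "duration_s": (int, float)}

def violations(record):
    errs = []
    for field, expected in CONTRACT.items():
        if field not in record:
            errs.append(f"missing: {field}")
        elif not isinstance(record[field], expected):
            errs.append(f"bad type: {field}")
    return errs

good = {"call_id": "c-1", "agent_id": "a-9", "duration_s": 312.5}
bad = {"call_id": "c-2", "duration_s": "312"}
print(violations(good))  # []
print(violations(bad))   # ['missing: agent_id', 'bad type: duration_s']
```

Production pipelines do the same thing with richer tooling (Avro/Protobuf schemas, schema registries, or libraries like pydantic), but the contract idea is identical.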
Posted 1 week ago
0 years
0 Lacs
Greater Vadodara Area
On-site
About Navaera Worldwide
Navaera Worldwide is a global, full-service firm specializing in advanced knowledge management products and services that empower financial organizations to enhance operational efficiency, manage risk, detect fraud, and gain a competitive edge. With headquarters in New York and offices in Toronto (Canada) and Baroda (India), Navaera delivers complex, enterprise-grade business products and technology solutions to diverse clients worldwide.

Role Overview
As a Specialist Java Developer, you will be responsible for designing, developing, and maintaining high-performance, secure, and scalable Java-based applications. This role demands strong expertise in Java EE technologies, hands-on experience with industry-leading frameworks, and a proven ability to work on mission-critical enterprise applications, particularly within financial services.

Key Responsibilities
Application Development & Maintenance:
- Design, implement, and maintain enterprise-grade Java applications.
- Contribute across all phases of the software development lifecycle: requirement analysis, design, coding, testing, deployment, and maintenance.
- Ensure high performance, scalability, and availability of applications.

Code Quality & Testing:
- Write clean, testable, and efficient code following best practices and coding standards.
- Create and maintain automated unit and integration tests.
- Debug, troubleshoot, and optimize applications to resolve performance and functional issues.

Documentation & Collaboration:
- Maintain up-to-date technical documentation for code, APIs, and system architecture.
- Collaborate with cross-functional teams including Business Analysts, QA Engineers, and DevOps teams to deliver high-quality releases.

Continuous Improvement:
- Participate in architecture reviews and design discussions.
- Stay updated on emerging technologies and industry trends to recommend innovative solutions.
Technical Skills (Mandatory)
Core Java: Java 8+ with a strong understanding of concurrency, memory management, and multithreading.
Java EE Technologies: Servlets, JSP, Web Services (SOAP & REST).
Frameworks: Spring MVC, Spring Integration, Hibernate ORM.
Frontend Integration: JavaScript, CSS, Bootstrap, and responsive UI design integration with backend services.
Design Principles: Strong grasp of object-oriented design, SOLID principles, and design patterns.
Troubleshooting: Excellent problem-solving skills and experience with performance profiling and debugging tools.

Preferred Skills
Integration Frameworks: ActiveMQ, Metro-based Web Services, secure socket communications, Web Service Security.
Application Servers: Apache HTTP Server, Apache Tomcat, GlassFish.
Search Frameworks: Lucene, Solr, or Elasticsearch for high-speed search functionality.
Security & Compliance: Experience implementing secure coding practices, encryption, and authentication/authorization frameworks.
Additional Tools: Familiarity with build tools (Maven/Gradle), version control (Git), and CI/CD pipelines (Jenkins).

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Proven track record of delivering enterprise-grade applications. Strong analytical, debugging, and communication skills. (ref:hirist.tech)
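The mandatory skills above stress concurrency and safe handling of shared state. The hazard they point at is the unguarded read-modify-write; a minimal sketch (in Python, to keep all examples in this document in one language — the names are illustrative, not from the posting):

```python
import threading

class SafeCounter:
    """Counter whose increment is atomic: the lock serializes the
    read-modify-write that would otherwise race across threads."""

    def __init__(self) -> None:
        self._value = 0
        self._lock = threading.Lock()

    def increment(self) -> None:
        with self._lock:
            self._value += 1

    @property
    def value(self) -> int:
        with self._lock:
            return self._value

def hammer(counter: SafeCounter, n: int) -> None:
    for _ in range(n):
        counter.increment()

counter = SafeCounter()
threads = [threading.Thread(target=hammer, args=(counter, 1000)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 8000 every run; without the lock the total can fall short
```

The same reasoning carries over to Java's `synchronized` blocks or `AtomicInteger`.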
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Technical Product Manager
As a Technical Product Manager (TPM) for our internal Observability & Insights Platform, you will be responsible for defining the product strategy, owning discovery and delivery, and ensuring our engineers and stakeholders across 350+ services can build, debug, and operate confidently. You will own and evolve a platform that includes logging (ELK stack), metrics (Prometheus, Grafana, Thanos), tracing (Jaeger), structured audit logs, and SIEM integrations, while competing with high-cost solutions like Datadog and Honeycomb. Your impact will be both technical and strategic: improving developer experience, reducing operational noise, and driving platform efficiency and cost visibility.

Key Deliverables (Quarterly Outcomes)
Successfully manage and deliver initiatives from the Observability Roadmap / Job Jar, tracked via RAG status and Jira epics.
Complete structured discoveries for upcoming capabilities (e.g., SIEM exporter, SDK adoption, trace sampling).
Design and roll out scorecards (in Port) to measure observability maturity across teams.
Ensure feature parity and stakeholder migration in cost-saving initiatives (e.g., Datadog to Prometheus).
Track and report platform usage, reliability, and cost metrics aligned to business outcomes.
Drive feature documentation, adoption plans, and enablement sessions across engineering.

Jobs To Be Done
Define and evolve the observability product roadmap (Logs, Metrics, Traces, SDK, Dashboards, SIEM).
Lead dual-track agile product discovery for upcoming initiatives: gather context, define the problem, validate feasibility.
Partner with engineering managers to break down initiatives into quarterly deliverables, epics, and sprint-level execution.
Maintain the Observability Job Jar and present RAG status every 2 weeks with confidence backed by Jira hygiene.
Define and track metrics to measure the success of every platform capability (SLOs, cost savings, adoption %, etc.).
Work closely with FinOps, Security, and Platform teams to ensure observability aligns with cost, compliance, and operational goals.
Champion the adoption of SDKs, scorecards, and dashboards via enablement, documentation, and evangelism.

Ways Of Working
Work in dual-track agile: discover next quarter's priorities while delivering this quarter's committed outcomes.
Maintain a GPS PRD (Product Requirements Doc) for each major initiative: What problem are we solving? Why now? How do we measure value?
Collaborate deeply with engineers in backlog grooming, planning, demos, and retrospectives.
Follow RAG-based reporting with stakeholders: escalate risks early, present mitigation paths clearly.
Operate with full visibility in Jira (Initiative > Epics > Stories > Subtasks), driving delivery rhythm across sprints.
Use quarterly Job Jar reviews to recalibrate product priorities, staffing needs, and stakeholder alignment.

You Should Have
10+ years of product management experience, ideally in platform/infrastructure products.
Proven success managing internal developer platforms or observability tooling.
Experience launching or migrating enterprise-scale telemetry stacks (e.g., Datadog to Prometheus/Grafana, Honeycomb to Jaeger).
Ability to break down complex engineering requirements into structured product plans with measurable outcomes.
Strong technical grounding in cloud-native environments (EKS, Kafka, Elasticsearch, etc.).
Excellent documentation and storytelling skills, especially to influence engineers and non-technical stakeholders.

Success Metrics
Reduction in Datadog/Honeycomb usage and cost post migration.
Uptime and latency of observability pipelines (Jaeger, ELK, Prometheus).
Scorecard improvement across teams (Bronze to Silver to Gold).
Number of issues detected/resolved using the new observability stack.
Time to incident triage with new tracing/logging capabilities. (ref:hirist.tech)
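RAG-based reporting ultimately rolls epic-level status up to an initiative. One common convention is worst-status-wins; a minimal sketch of that rule (the convention and the initiative names are assumptions, not from the posting):

```python
RAG_ORDER = {"Green": 0, "Amber": 1, "Red": 2}

def rollup_rag(epic_statuses):
    """Roll epic RAG statuses up to a single initiative status.
    Worst-status-wins: any Red epic makes the whole initiative Red."""
    if not epic_statuses:
        return "Green"  # an empty initiative has nothing at risk
    return max(epic_statuses, key=lambda s: RAG_ORDER[s])

# Hypothetical Job Jar snapshot: initiative -> statuses of its Jira epics
initiative = {
    "SIEM exporter": ["Green", "Amber"],
    "Trace sampling": ["Green", "Green"],
}
report = {name: rollup_rag(epics) for name, epics in initiative.items()}
print(report)  # {'SIEM exporter': 'Amber', 'Trace sampling': 'Green'}
```

Real teams often add rules (e.g., two Ambers make a Red), so the rollup function is the place such policy would live.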
Posted 1 week ago
6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Location: Kolkata
Experience Required: 6 to 8+ years
Employment Type: Full-time
CTC: 8 to 14 LPA

About Company: At Gintaa, we're redefining how Indians order food. With our focus on affordability, exclusive restaurant partnerships, and hyperlocal logistics, we aim to scale across India's Tier 1 and Tier 2 cities. We're backed by a mission-driven team and expanding rapidly – now’s the time to join the core tech leadership and build something impactful from the ground up.

Job Description: We are seeking a talented and experienced Mid-Senior Level Software Engineer (Backend) to join our dynamic team. The ideal candidate will have strong expertise in backend technologies, microservices architecture, and cloud environments. You will be responsible for designing, developing, and maintaining high-performance backend systems to support scalable applications.

Responsibilities:
Design, develop, and maintain robust, scalable, and secure backend services and APIs.
Work extensively with Java, Spring Boot, Spring MVC, and Hibernate to build and optimize backend applications.
Develop and manage microservices-based architectures.
Implement and optimize RDBMS (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra, etc.) solutions.
Build and maintain RESTful services for seamless integration with frontend and third-party applications.
A basic understanding of Node.js and Python is a bonus, as is the ability to learn and work with new technologies.
Optimize system performance, security, and scalability.
Deploy and manage applications in cloud environments (AWS, GCP, or Azure).
Collaborate with cross-functional teams including frontend engineers, DevOps, and product teams.
Convert business requirements into technical development items using critical thinking and analysis.
Lead a team and manage activities, including task distribution.
Write clean, maintainable, and efficient code following best practices.
Participate in code reviews and technical discussions, and contribute to architectural decisions.
Required Skills:
6+ years of experience in backend development with Java and the Spring framework (Spring Boot, Spring MVC).
Strong knowledge of Hibernate (ORM) and database design principles.
Hands-on experience with microservices architecture and RESTful API development.
Proficiency in RDBMS (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra, etc.).
Experience with cloud platforms such as AWS, GCP, or Azure.
Experience with Kafka or an equivalent tool for messaging and stream processing.
Basic knowledge of Node.js for backend services and APIs.
Proven track record of working in a fast-paced Agile/Scrum methodology.
Proficient with Git.
Familiarity with IDE tools such as IntelliJ and VS Code.
Strong problem-solving and debugging skills.
Understanding of system security, authentication, and authorization best practices.
Excellent communication and collaboration skills.

Preferred Skills (Nice to Have):
Experience with Elasticsearch for search and analytics.
Familiarity with Firebase tools for real-time database, Firestore, authentication, and notifications.
Hands-on experience with Google Cloud Platform (GCP) services.
Hands-on experience working with Node.js and Python.
Exposure to containerization and orchestration tools like Docker and Kubernetes.
Experience in CI/CD pipelines and basic DevOps practices.
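Among the requirements above, authentication best practices lend themselves to a short sketch. A minimal HMAC request-signing check in Python, using only the standard library (the secret, payload format, and function names are illustrative; real keys belong in a secrets manager, never in code):

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # illustrative only; never hard-code real keys

def sign(payload: bytes, secret: bytes = SECRET) -> str:
    """Produce a hex HMAC-SHA256 signature for a request payload."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str, secret: bytes = SECRET) -> bool:
    """Recompute and compare; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign(payload, secret), signature)

token = sign(b"order:42")
print(verify(b"order:42", token))  # True
print(verify(b"order:43", token))  # False: payload was tampered with
```

The same scheme (sign on the client, verify on the server, constant-time compare) is what webhook signatures and many API gateways implement, whatever the language.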
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About This Role
At BlackRock, technology has always been at the core of what we do – and today, our technologists continue to shape the future of the industry with their innovative work. We are not only curious but also collaborative and eager to embrace experimentation as a means to solve complex challenges. Here you’ll find an environment that promotes working across teams, businesses, regions and specialties – and a firm committed to supporting your growth as a technologist through curated learning opportunities, tech-specific career paths, and access to experts and leaders around the world.

We are seeking a highly skilled and motivated senior-level Data Engineer to join the Private Market Data Engineering team within Aladdin Data at BlackRock, to drive our Private Market Data Engineering vision of making private markets more accessible and transparent for clients. In this role, you will work multi-functionally with Product, Data Research, Engineering, and Program Management. Engineers looking to work in the areas of orchestration, data modeling, data pipelines, APIs, storage, distribution, distributed computation, consumption and infrastructure are ideal candidates. The candidate will have extensive experience in developing data pipelines using Python, Java, the Apache Airflow orchestration platform, DBT (Data Build Tool), Great Expectations for data validation, Apache Spark, MongoDB, Elasticsearch, Snowflake and PostgreSQL. In this role, you will be responsible for designing, developing, and maintaining robust and scalable data pipelines. You will collaborate with various stakeholders to ensure the data pipelines are efficient, reliable, and meet the needs of the business.
Key Responsibilities
Design, develop, and maintain data pipelines using the Aladdin Data Enterprise Data Platform framework.
Develop ETL/ELT data pipelines using Python and SQL, and deploy them as containerized apps on a Kubernetes cluster.
Develop APIs for data distribution on top of the standard data model of the Enterprise Data Platform.
Design and develop optimized back-end services in Java / Python for APIs to handle faster data retrieval and optimized processing.
Develop reusable back-end services for data pipeline processing in Python / Java.
Develop data transformations using DBT (Data Build Tool) with SQL or Python.
Ensure data quality and integrity through automated testing and validation using tools like Great Expectations.
Implement all observability requirements in the data pipeline.
Optimize data workflows for performance and scalability.
Monitor and troubleshoot data pipeline issues, ensuring timely resolution.
Document data engineering processes and best practices whenever required.

Required Skills And Qualifications
Must have 5 to 8 years of experience in data engineering, with a focus on building data pipelines and Data Services APIs.
Strong server-side programming skills in Python and/or Java. Experience working with backend microservices and APIs using Java and/or Python.
Experience with Apache Airflow or any other orchestration framework for data orchestration.
Proficiency in DBT for data transformation and modeling.
Experience with data quality validation tools like Great Expectations or other similar tools.
Strong SQL-writing skills and experience with relational databases like SQL Server and PostgreSQL.
Experience with a cloud-based data warehouse platform like Snowflake.
Experience working with NoSQL databases like Elasticsearch and MongoDB.
Experience working with a container orchestration platform like Kubernetes on AWS and/or Azure cloud environments.
Experience with cloud platforms like AWS and/or Azure.
Ability to work collaboratively in a team environment.
Need to possess
the critical skills of attention to detail, a passion for learning new technologies, and strong analytical and problem-solving ability. Experience with Financial Services applications is a plus. Effective communication skills, both written and verbal. Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees.
It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
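The data-quality responsibility described in this role is usually expressed declaratively in Great Expectations; the semantics, though, are simple to hand-roll. A minimal analogue of a not-null expectation (the result-dict fields loosely mimic GE's validation output, and the sample rows are made up):

```python
def expect_column_values_not_null(rows, column):
    """Hand-rolled analogue of a not-null expectation: report which rows
    fail instead of raising, so a pipeline can route bad batches aside."""
    failures = [i for i, row in enumerate(rows) if row.get(column) is None]
    return {
        "success": not failures,
        "unexpected_count": len(failures),
        "unexpected_index_list": failures,
    }

# Illustrative batch: one record is missing its NAV and should be caught
batch = [
    {"fund_id": "F1", "nav": 101.3},
    {"fund_id": "F2", "nav": None},
    {"fund_id": "F3", "nav": 99.8},
]
result = expect_column_values_not_null(batch, "nav")
print(result)  # {'success': False, 'unexpected_count': 1, 'unexpected_index_list': [1]}
```

In a real pipeline such checks run as a validation step between extract and load, and a failed batch is quarantined rather than written to the warehouse.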
Posted 1 week ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As an ELK (Elasticsearch, Logstash, and Kibana) architect, you will be responsible for designing and implementing the data architecture and infrastructure for data analytics, log management, and visualization solutions using the ELK stack. You will collaborate with cross-functional teams, including data engineers, developers, system administrators, and stakeholders, to define data requirements, design data models, and ensure efficient data processing, storage, and retrieval. Your expertise in ELK and data architecture will be instrumental in building scalable and performant data solutions.

Responsibilities:
Data Architecture Design: Collaborate with stakeholders to understand business requirements and define the data architecture strategy for ELK-based solutions. Design scalable and robust data models, data flows, and data integration patterns.
ELK Stack Implementation: Lead the implementation and configuration of ELK stack infrastructure to support data ingestion, processing, indexing, and visualization. Ensure high availability, fault tolerance, and optimal performance of the ELK environment.
Data Ingestion and Integration: Design and implement efficient data ingestion pipelines using Logstash or other relevant technologies. Integrate data from various sources, such as databases, APIs, logs, AppDynamics, and storage and streaming platforms, into ELK for real-time and batch processing.
Data Modeling and Indexing: Design and optimize Elasticsearch indices and mappings to enable fast and accurate search and analysis.
Define index templates, shard configurations, and document structures to ensure efficient storage and retrieval of data.
Data Visualization and Reporting: Collaborate with stakeholders to understand data visualization and reporting requirements. Utilize Kibana to design and develop visually appealing and interactive dashboards, reports, and visualizations that enable data-driven decision-making.
Performance Optimization: Analyze and optimize the performance of data processing and retrieval in ELK. Tune Elasticsearch settings, queries, and aggregations to improve search speed and response time. Optimize data storage, caching, and memory management.
Data Security and Compliance: Implement security measures and access controls to protect sensitive data stored in ELK. Ensure compliance with data privacy regulations and industry standards by implementing appropriate encryption, access controls, and auditing mechanisms.
Documentation and Collaboration: Create and maintain documentation of data models, data flows, system configurations, and best practices. Collaborate with cross-functional teams, providing guidance and support on data architecture and ELK-related topics.

Who You Are
Candidates should have a minimum of 8 years of experience. Apply architectural methods. Design information system architecture. Lead systems engineering management. AD & AI leadership.

Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture.
That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
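The shard-configuration responsibility above rests on how Elasticsearch routes documents: a document lands on shard `hash(_routing) % number_of_primary_shards`. A sketch of why that makes the primary shard count fixed at index creation (Elasticsearch uses murmur3 plus a routing factor internally; `crc32` stands in here as a deterministic hash):

```python
import zlib

def shard_for(routing_key: str, num_primary_shards: int) -> int:
    """Sketch of Elasticsearch-style document routing:
    shard = hash(_routing) % number_of_primary_shards."""
    return zlib.crc32(routing_key.encode("utf-8")) % num_primary_shards

# The same key always lands on the same shard. If the shard count changed,
# nearly every document would be re-homed, which is why growing an index's
# primary shard count requires a reindex (or over-sharding up front).
placement_a = [shard_for(f"doc-{i}", 5) for i in range(1000)]
placement_b = [shard_for(f"doc-{i}", 6) for i in range(1000)]
moved = sum(a != b for a, b in zip(placement_a, placement_b))
print(f"{moved} of 1000 documents would move if shards went 5 -> 6")
```

This is the mechanical reason index templates pin `number_of_shards`, and why capacity planning for shard count happens before ingestion, not after.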
Posted 1 week ago
5.0 years
5 - 10 Lacs
Hyderābād
On-site
DESCRIPTION
The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers using rigorous quantitative approaches to ensure high quality data/science products for our customers around the world.

We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences.

Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM).
Here is a glimpse of the problems that this team deals with on a regular basis:
Using live package and truck signals to adjust truck capacities in real-time
HOTW models for Last Mile Channel Allocation
Using LLMs to automate analytical processes and insight generation
Ops research to optimize middle mile truck routes
Working with global partner science teams to affect Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings
Deep Learning models to synthesize attributes of addresses
Abuse detection models to reduce network losses

Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems
2. Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes
3. Design, develop, evaluate and deploy innovative and highly scalable ML/OR models
4. Work closely with other science and engineering teams to drive real-time model implementations
5. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
6. Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model maintenance
7. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production
8. Lead projects and mentor other scientists and engineers in the use of ML techniques

BASIC QUALIFICATIONS
5+ years of data scientist experience
Experience with data scripting languages (e.g. SQL, Python, R etc.) or statistical/mathematical software (e.g. R, SAS, or Matlab)
Experience with statistical models, e.g.
multinomial logistic regression
Experience in data applications using large scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
Experience working with data engineers and business intelligence engineers collaboratively
Demonstrated expertise in a wide range of ML techniques

PREFERRED QUALIFICATIONS
Experience as a leader and mentor on a data science team
Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
Expertise in Reinforcement Learning and Gen AI is preferred

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
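The multinomial logistic regression named in the qualifications reduces, at inference time, to one linear score per class followed by a softmax. A self-contained sketch with made-up weights (any real model would learn these from data):

```python
import math

def softmax(scores):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_proba(x, weights, biases):
    """Multinomial logistic regression inference: one linear score per
    class, squashed into a probability distribution by softmax."""
    scores = [
        sum(w_i * x_i for w_i, x_i in zip(class_w, x)) + b
        for class_w, b in zip(weights, biases)
    ]
    return softmax(scores)

# Toy 3-class model over 2 features; weights are illustrative only
weights = [[1.0, -0.5], [0.2, 0.8], [-1.0, 0.1]]
biases = [0.0, 0.1, -0.2]
probs = predict_proba([0.5, 1.5], weights, biases)
print([round(p, 3) for p in probs])
```

The max-shift in `softmax` is the standard guard against overflow when scores are large; it changes nothing mathematically because softmax is shift-invariant.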
Posted 1 week ago
5.0 years
5 - 10 Lacs
Gurgaon
On-site
DESCRIPTION
The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers using rigorous quantitative approaches to ensure high quality data/science products for our customers around the world.

We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences.

Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM).
Here is a glimpse of the problems that this team deals with on a regular basis:
Using live package and truck signals to adjust truck capacities in real-time
HOTW models for Last Mile Channel Allocation
Using LLMs to automate analytical processes and insight generation
Ops research to optimize middle mile truck routes
Working with global partner science teams to affect Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings
Deep Learning models to synthesize attributes of addresses
Abuse detection models to reduce network losses

Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems
2. Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes
3. Design, develop, evaluate and deploy innovative and highly scalable ML/OR models
4. Work closely with other science and engineering teams to drive real-time model implementations
5. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
6. Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model maintenance
7. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production
8. Lead projects and mentor other scientists and engineers in the use of ML techniques

BASIC QUALIFICATIONS
5+ years of data scientist experience
Experience with data scripting languages (e.g. SQL, Python, R etc.) or statistical/mathematical software (e.g. R, SAS, or Matlab)
Experience with statistical models, e.g.
multinomial logistic regression
Experience in data applications using large scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
Experience working with data engineers and business intelligence engineers collaboratively
Demonstrated expertise in a wide range of ML techniques

PREFERRED QUALIFICATIONS
Experience as a leader and mentor on a data science team
Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
Expertise in Reinforcement Learning and Gen AI is preferred

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 week ago
2.0 - 3.0 years
5 - 7 Lacs
Noida
On-site
Senior Executive EXL/SE/1435682 Digital Solutions, Noida Posted On 30 Jul 2025 End Date 13 Sep 2025 Required Experience 2 - 3 Years

Basic Section: Number Of Positions 3, Band A2, Band Name Senior Executive, Cost Code G090529, Campus/Non Campus NON CAMPUS, Employment Type Permanent, Requisition Type New, Max CTC 500000.0000 - 700000.0000, Complexity Level Not Applicable, Work Type Hybrid – Working Partly From Home And Partly From Office, Organisational Group EXL Digital, Sub Group Digital Solutions, Organization Digital Solutions, LOB Digital Solutions, SBU PayMentor, Country India, City Noida, Center Noida - Centre 59

Skills: Customer Service, Monitoring, Engineers, Troubleshooting, Computer Troubleshooter, Technical, Data Analysis
Minimum Qualification: B.TECH/BE, BACHELOR OF SCIENCE (BSC), BCA
Certification: No data available

Job Description – Digital Transformation – 24x7 Support Engineer
Position Title, Responsibility Level: Junior Support Engineer
Function: Digital
Reports to: AM/Manager
Regular/Temporary: Regular
Grade: A2
Location: Noida, India

Objectives of the Role: We are seeking an experienced 24x7 Support Engineer / IT Operations Engineer to ensure continuous and uninterrupted operation of critical IT systems and infrastructure by providing round-the-clock monitoring, support, and rapid incident response. The 24x7 Engineer plays a crucial role in minimizing downtime, maintaining service availability, and supporting business continuity for global operations.

Responsibilities:
Monitor IT infrastructure, systems, and applications 24x7 to ensure optimal performance and availability.
Respond to incidents, alerts, and outages in real time and follow standard operating procedures for resolution or escalation.
Perform root cause analysis and document incidents, resolutions, and preventive measures.
Work in rotational shifts (including nights, weekends, and holidays) to provide continuous operational coverage.
Coordinate with L2/L3 support, vendors, and other internal teams for advanced troubleshooting and escalation.
Execute routine operational tasks such as system health checks, backup monitoring, batch job status validation, and performance tuning activities.
Use monitoring tools (e.g., SolarWinds, Nagios, Splunk, Zabbix, SCOM) to proactively identify system issues.
Maintain and update technical documentation, SOPs, and knowledge base articles for operational reference.
Ensure adherence to SLAs, security standards, and compliance policies during all support activities.
Participate in change management processes and support planned deployments or patching activities during off-business hours.
Continuously improve system monitoring, alerting thresholds, and automation for routine tasks.
Provide handover reports at the end of each shift to ensure a seamless transition between teams.

Skills
Technical Skills: Windows/Linux servers, cloud platforms (AWS/Azure), network devices, firewalls, databases (Oracle, SQL Server), enterprise applications, Active Directory, virtualization (VMware, Hyper-V), ServiceNow.
Operating Systems:
- Windows Server (2012/2016/2019/2022)
- Linux/Unix (Red Hat, CentOS, Ubuntu)
- File system management, user/group management, basic scripting (Bash/PowerShell)

Networking & Infrastructure:
- TCP/IP, DNS, DHCP, HTTP/S, FTP, VPN
- Basic knowledge of routing, switching, firewalls
- Load balancer configuration (F5, HAProxy, Nginx)
- Network troubleshooting using tools such as ping, tracert, netstat, nslookup

Monitoring & Logging Tools:
- Splunk, ELK Stack (Elasticsearch, Logstash, Kibana)
- Cloud-native monitoring: CloudWatch, Azure Monitor, Datadog

Security & Access Management:
- Active Directory & Group Policy
- Role-based access control, IAM (on cloud)
- Antivirus/endpoint security monitoring

Soft Skills (Desired):
- Strong analytical and problem-solving skills
- Excellent communication and teamwork
- Ability to work under pressure and in high-stakes environments
- Attention to detail and commitment to service excellence

Education Requirements:
- B.E. / B.Tech / B.Sc. / BCA
- ServiceNow Certified System Administrator (for roles involving ITSM tools)

Work Experience Requirements

Must Have:
- Minimum 2 years of relevant experience as a 24x7 Support Engineer / IT Operations Engineer in the IT industry.
- Provided round-the-clock L1/L2 support for critical enterprise applications across multiple environments (Production, UAT, Dev).
- Diagnosed and resolved incidents, service requests, and user-reported issues within SLA timelines using ServiceNow and Jira Service Desk.
- Performed log analysis and initial triage for application errors, server issues, job failures, and performance bottlenecks.
- Escalated complex issues to development, infrastructure, or database teams with well-documented analysis and RCA inputs.
- Conducted shift handovers with detailed summaries of open incidents, action items, and follow-ups.
- Participated in weekly deployments, release validations, and post-deployment support for major rollouts.
- Performed health checks, backups, patch validation, and maintenance tasks during off-business hours.
- Maintained knowledge base articles, SOPs, and known-error documents for recurring issues and L1 resolution.

Preferred Skills:
- Proficiency in tools such as Splunk, AppDynamics, Dynatrace, or the ELK Stack for real-time alerting and performance diagnostics.
- Ability to write simple PowerShell, Bash, or Python scripts to automate routine tasks, log parsing, or health checks.

Workflow Type: Digital Solution Center
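As an illustrative sketch of the kind of scripting this role calls for, a minimal Python log-parsing health check might look like the following (the log format, field names, and error threshold are hypothetical, not tied to any specific system):

```python
import re
from collections import Counter

# Hypothetical log-parsing health check: counts ERROR/WARN/INFO lines and
# flags the service as unhealthy once errors pass a chosen threshold.
LEVEL_RE = re.compile(r"\b(ERROR|WARN|INFO)\b")

def summarize_log(lines, error_threshold=5):
    """Return (level counts, healthy flag) for an iterable of log lines."""
    counts = Counter()
    for line in lines:
        match = LEVEL_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    healthy = counts["ERROR"] < error_threshold
    return counts, healthy

if __name__ == "__main__":
    sample = [
        "2025-07-30 02:10:11 INFO  batch job nightly-backup finished",
        "2025-07-30 02:11:42 ERROR connection to db01 refused",
        "2025-07-30 02:11:43 WARN  retrying in 30s",
    ]
    counts, healthy = summarize_log(sample)
    print(counts["ERROR"], healthy)
```

In practice a script like this would read the real log file, run on a schedule, and raise an alert (or feed a monitoring tool) when the healthy flag goes false.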
Posted 1 week ago
0 years
0 Lacs
Greater Delhi Area
Remote
About Tide

At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, as well as a comprehensive set of connected administrative solutions, from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 people. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money.

About The Team

Our 40+ engineering teams design, create and run the rich product catalogue across our business areas (e.g. Payments Services, Business Services). We have a long roadmap ahead of us and always have interesting problems to tackle. We trust and empower our engineers to make real technical decisions that affect multiple teams and shape the future of Tide's Global One Platform. It's an exceptional opportunity to make a real difference by taking ownership of engineering practices in a rapidly expanding company! We work in small autonomous teams, grouped under common domains, each owning the full lifecycle of some of the microservices in Tide's service catalogue. Our engineers self-organize, gather to discuss technical challenges, and set their own guidelines in the different Communities of Practice, regardless of where they currently stand in our Growth Framework.

About The Role

- Contribute to our event-driven microservice architecture (currently 200+ services owned by 40+ teams). You will define and maintain the services your team owns (you design it, you build it, you run it, you scale it globally).
- Use Java 17, Spring Boot and JOOQ to build your services.
- Expose and consume RESTful APIs.
- We value good API design and treat our APIs as products (in the world of Open Banking, they are often going to be public!).
- Use SNS+SQS and Kafka to send events.
- Utilise PostgreSQL via Aurora as your primary datastore (we are heavy AWS users).
- Deploy your services to production as often as you need to (this usually means multiple times per day!). This is enabled by our CI/CD pipelines powered by GitHub with GitHub Actions and solid JUnit/Pact testing (new joiners are encouraged to have something deployed to production in their first two weeks).
- Experience modern GitOps using ArgoCD. Our Cloud team uses Docker, Terraform and EKS/Kubernetes to run the platform.
- Have DataDog as your best friend to monitor your services and investigate issues.
- Collaborate closely with Product Owners to understand our users' needs, business opportunities and regulatory requirements, and translate them into well-engineered solutions.

What We Are Looking For

- Some experience building server-side applications and detailed knowledge of the relevant programming languages for your stack. You don't need to know Java, but bear in mind that most of our services are written in Java, so you need to be willing to learn it when you have to change something there!
- A sound knowledge of a backend framework (e.g. Spring/Spring Boot) that you've used to write microservices that expose and consume RESTful APIs.
- Experience engineering scalable and reliable solutions in a cloud-native environment (the most important thing for us is understanding the fundamentals of CI/CD; practical Agile, so to speak).
- A mindset of delivering secure, well-tested and well-documented software that integrates with various third-party providers and partners (we do that a lot in the fintech industry).

Our Tech Stack

- Java 17, Spring Boot and JOOQ to build the RESTful APIs of our microservices
- Event-driven architecture with messages over SNS+SQS and Kafka to make them reliable
- Primary datastores are MySQL and PostgreSQL via RDS or Aurora (we are heavy AWS users)
- Docker, Terraform, EKS/Kubernetes used by the Cloud team to run the platform
- DataDog, ElasticSearch/Fluentd/Kibana and Rollbar to keep it running
- GitHub with GitHub Actions for Sonarcloud, Snyk and solid JUnit/Pact testing to power the CI/CD pipelines

What You Will Get In Return

- Competitive salary
- Self & family health insurance
- Term & life insurance
- OPD benefits
- Mental wellbeing through Plumm
- Learning & development budget
- WFH setup allowance
- 25 annual leaves
- Family & friendly leaves

Tidean Ways Of Working

At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community.

TIDE IS A PLACE FOR EVERYONE

At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels.
We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard.

Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
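As an illustrative aside, the "events over SNS+SQS" pattern named in the tech stack above might be sketched as follows. This is a minimal sketch in Python rather than the stack's Java, and the event envelope, topic, and field names are hypothetical, not Tide's actual schema:

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical domain-event envelope for publishing over SNS: a unique id,
# an event type, a timestamp, and the domain payload itself.
def build_event(event_type, payload):
    """Wrap a domain payload in a minimal event envelope."""
    return {
        "id": str(uuid.uuid4()),
        "type": event_type,
        "occurredAt": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }

def serialize(event):
    """SNS message bodies are strings, so serialize the envelope as JSON."""
    return json.dumps(event)

# Actual publishing would go through boto3 (requires AWS credentials), e.g.:
#   import boto3
#   sns = boto3.client("sns")
#   sns.publish(TopicArn=topic_arn, Message=serialize(event))
if __name__ == "__main__":
    event = build_event("invoice.created", {"invoiceId": "INV-1", "amount": 125.0})
    print(serialize(event))
```

Consumers on the SQS side would deserialize the JSON body and dispatch on the `type` field, which is what makes the envelope (rather than a bare payload) worth the extra structure.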
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Today, NVIDIA is tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, encouraging environment where everyone is inspired to do their best work. Come join the team and see how we can make a lasting impact on the world.

NVIDIA is hiring senior software engineers in its Infrastructure, Planning and Process (IPP) team to accelerate AI adoption across engineering workflows within the company. IPP is a global organization within NVIDIA. The group works with various other teams within NVIDIA, such as Graphics Processors, Mobile Processors, Deep Learning, Artificial Intelligence and Driverless Cars, to cater to their infrastructure and software development workflow needs. As a senior engineer on AI Workflow, you will design and implement tools and software solutions that leverage Large Language Models and agentic AI to automate end-to-end software engineering workflows and enhance the productivity of engineers across NVIDIA.

What You'll Be Doing

- Design and implement AI-driven solutions across software development lifecycles to enhance developer productivity, accelerate feedback loops, and improve the reliability of releases.
- Design, develop, and deploy AI agents that automate software development workflows and processes.
- Continuously measure and report on the impact of AI interventions, demonstrating improvements in key metrics such as cycle time, change failure rate, and mean time to recovery (MTTR).
- Build and deploy predictive models to identify high-risk commits, forecast potential build failures, and flag changes with a high probability of failure.
- Research emerging AI technologies and engineering best practices to continuously evolve our development ecosystem and maintain a competitive edge.

What We Need To See

- BE (MS preferred) or equivalent experience in EE/CS with 10+ years of work experience.
- Deep practical knowledge of Large Language Models (LLMs), Machine Learning (ML), and agent development.
- Hands-on experience implementing AI solutions to solve real-world software engineering problems.
- Hands-on experience with Python, Java, or Go, with extensive Python scripting experience.
- Experience working with SQL/NoSQL database systems such as MySQL, MongoDB or Elasticsearch.
- Full-stack, end-to-end development expertise, with proficiency in building and integrating solutions from the front-end (e.g., React, Angular) to the back-end (Python, Go, Java) and managing data infrastructure (SQL/NoSQL).
- Experience with CI/CD tools such as Jenkins, GitLab CI, Packer, Terraform, Artifactory, Ansible, Chef or similar.
- Good understanding of distributed systems, microservice architecture and REST APIs.
- Ability to work effectively across organizational boundaries to enhance alignment and productivity between teams.

Ways To Stand Out From The Crowd

- Proven expertise in applied AI, particularly using Retrieval-Augmented Generation (RAG) and fine-tuning LLMs on enterprise data to solve complex software engineering challenges.
- Experience delivering large-scale, service-oriented software projects under real-time constraints, demonstrating an understanding of the complex development environments this role will optimize.
- Expertise in leveraging LLMs and agentic AI to automate complex workflows.
- Expertise in agent development using the latest AI technologies and a deep understanding of agentic workflows.

We have some of the most forward-thinking and versatile people in the world working for us and, due to unprecedented growth, our best-in-class engineering teams are rapidly growing.
We are building a team that will truly change the world. If you are passionate about new technologies, care about software quality, and want to be part of the future of transportation and AI, we would love for you to join us.

JR2001060
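As an illustrative aside, the commit-risk prediction idea described in the role above might be sketched with a toy logistic model. Every feature name and weight here is hypothetical; a real system would learn its weights from historical build and change-failure data:

```python
import math

# Toy commit-risk scorer: a hand-weighted logistic model over a few
# hypothetical commit features. Illustrative only; real weights would be
# fit to historical failure data.
WEIGHTS = {"lines_changed": 0.004, "files_touched": 0.08, "touches_ci_config": 1.2}
BIAS = -2.0

def commit_risk(features):
    """Return a probability in (0, 1) that the commit causes a failure."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    small = {"lines_changed": 10, "files_touched": 1, "touches_ci_config": 0}
    risky = {"lines_changed": 900, "files_touched": 25, "touches_ci_config": 1}
    # A small, contained change should score lower than a sprawling one
    # that also edits CI configuration.
    print(commit_risk(small) < commit_risk(risky))
```

A pipeline could gate on such a score, for example by routing commits above a threshold to extra review or a heavier test suite.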
Posted 1 week ago