5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Cloud Application Developer at our organization, you will help design, develop, and maintain robust cloud-native applications in an as-a-service model on a Cloud platform. Your responsibilities will include evaluating, implementing, and standardizing new tools and solutions to continuously improve the Cloud Platform. You will leverage your expertise to drive the organization's and departments' technical vision in development teams. Additionally, you will liaise with global and local stakeholders to influence technical roadmaps and passionately contribute towards hosting a thriving developer community. Encouraging contribution towards inner- and open-sourcing will be a key aspect of your role.

To excel in this position, you should have experience and exposure to good programming practices, including coding and testing standards. Your passion and experience in proactively investigating, evaluating, and implementing new technical solutions with continuous improvement will be highly valued. A good development culture and familiarity with industry-wide best practices are essential, as are a production mindset with a keen focus on reliability and quality, and a passion for being part of a distributed, self-sufficient feature team with regular deliverables. You should be a proactive learner who continuously enhances your skills in areas such as Scrum, Data, and Automation. Strong technical ability to monitor, investigate, analyze, and fix production issues will be essential in this role. You should also be able to ideate and collaborate through inner- and open-sourcing, and interact effectively with client managers, developers, testers, and cross-functional teams such as architects. Experience working in an Agile team and exposure to Agile/SAFe development methodologies are required, along with a minimum of 5 years of experience in software development and architecture.

In terms of technical skills, you should have good experience in design and development, including object-oriented programming in Python, cloud-native application development, APIs, and microservices (a brief illustrative sketch follows at the end of this posting). Familiarity with relational databases like PostgreSQL and the ability to build robust SQL queries are essential. Knowledge of tools such as Grafana for data visualization, Elasticsearch, Fluentd, and hosting applications using containerization (Docker, Kubernetes) will be beneficial. Proficiency with CI/CD and DevOps tools like Git, Jenkins, and Sonar, good system skills with Linux OS, and bash scripting are also required. An understanding of the Cloud and cloud services is a must-have for this role.

Joining our team means being part of a company that values people as drivers of change and believes in making a positive impact on the future. We encourage creating, daring, innovating, and taking action. Our employees have the opportunity to engage in solidarity actions and contribute to reducing the carbon footprint through sustainable practices. Diversity and inclusion are core values that we uphold, and we are committed to ESG practices. If you are looking to be directly involved, grow in a stimulating environment, and make a difference, you will find a home with us.
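To make the technical expectations above concrete, here is a minimal sketch of a Python microservice endpoint backed by PostgreSQL. FastAPI and psycopg2 are used purely for illustration, and the table, columns, and connection details are assumptions rather than anything specified in the posting.

```python
# Minimal sketch: a cloud-native-style Python microservice endpoint that
# serves data from PostgreSQL. Table and column names are illustrative only.
from fastapi import FastAPI, HTTPException
import psycopg2  # assumes a reachable PostgreSQL instance

app = FastAPI()

def get_connection():
    # Connection details are placeholders; a real deployment would read
    # them from environment variables or a secrets manager.
    return psycopg2.connect(host="localhost", dbname="appdb",
                            user="app", password="secret")

@app.get("/orders/{order_id}")
def read_order(order_id: int):
    with get_connection() as conn, conn.cursor() as cur:
        # Parameterized query: never interpolate user input into SQL.
        cur.execute("SELECT id, status FROM orders WHERE id = %s", (order_id,))
        row = cur.fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="order not found")
    return {"id": row[0], "status": row[1]}
```

Run under an ASGI server such as uvicorn, this is the general shape of a small cloud-native service: stateless handlers, parameterized SQL, and configuration injected from the environment.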
Posted 3 weeks ago
0 years
35 - 55 Lacs
Hyderabad, Telangana, India
On-site
Company: NxtHyre
Business Type: Startup
Company Type: Product
Business Model: B2B
Funding Stage: Series B
Industry: Artificial Intelligence
Salary Range: ₹35-55 Lacs PA

Job Description: This is a permanent role with a valued client of NxtHyre in fintech.

About Client: Founded in 2019, the client recently raised its Series B round, led by Innovius Capital with participation from Dell Technologies Capital, Sentinel Global, and existing investors including Venrock, NeoTribe Ventures, Engineering Capital, Workday Ventures, and KPMG Ventures.

We are looking for strong candidates with a passion for participating in our mission. Areas of responsibility are subject to change over time.

Responsibilities:
- Developing and managing data pipelines for ML and analytics
- Effectively analyzing and resolving engineering issues as they arise
- Implementing ML algorithms for textual categorization and information extraction
- Writing containerized microservices for serving models in a production environment (see the sketch after this posting)
- Writing unit tests alongside development

Must-haves
- Python programming expertise: data structures, OOP, recursion, generators, iterators, decorators, and familiarity with regular expressions
- Working knowledge and experience with a deep learning framework (PyTorch or TensorFlow); embedding representations
- Familiarity with SQL database interactions
- Familiarity with Elasticsearch document indexing and querying
- Familiarity with Docker and Dockerfiles
- Familiarity with REST APIs and JSON; Python packages like FastAPI
- Familiarity with Git operations
- Familiarity with shell scripting
- Familiarity with PyCharm for development, debugging, and profiling
- Experience with Kubernetes

Desired
- NLP toolkits like NLTK, spaCy, Gensim, scikit-learn
- Familiarity with basic natural language concepts and handling: tokenization, lemmatization, stemming, edit distances, named entity recognition, syntactic parsing, etc.
- Good knowledge and experience with a deep learning framework (PyTorch or TensorFlow)
- More complex Elasticsearch operations: creating indices, indexable fields, etc.
- Good experience with Kubernetes
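As a rough illustration of the "containerized microservice serving a model" responsibility above, here is a minimal PyTorch + FastAPI sketch. The tiny untrained model, byte-level tokenization, and label names are all placeholders; a real service would load trained weights and a proper tokenizer.

```python
# Minimal sketch of serving a PyTorch text classifier behind a FastAPI
# microservice. Everything model-related here is a stand-in for a real,
# trained artifact.
import torch
import torch.nn as nn
from fastapi import FastAPI
from pydantic import BaseModel

LABELS = ["invoice", "contract", "other"]  # hypothetical document categories

class TinyClassifier(nn.Module):
    # Stand-in for a real trained model; these weights are untrained.
    def __init__(self, vocab_size: int = 256, n_classes: int = 3):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, 32)
        self.head = nn.Linear(32, n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.head(self.embed(token_ids.unsqueeze(0)))

model = TinyClassifier()
model.eval()

class Request(BaseModel):
    text: str

app = FastAPI()

@app.post("/classify")
def classify(req: Request):
    # Toy byte-level "tokenization"; a real service would use a real tokenizer.
    token_ids = torch.tensor(list(req.text.encode("utf-8")), dtype=torch.long)
    with torch.no_grad():
        scores = model(token_ids)
    return {"label": LABELS[int(scores.argmax())]}
```

Packaged with a small Dockerfile and run under uvicorn, this is the usual shape of a containerized model-serving endpoint.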
Posted 3 weeks ago
5.0 years
0 Lacs
India
On-site
Front-End Developer - Credly & Faethm

As a Front-End Developer working on Credly and Faethm, you will play a key role in designing and delivering exceptional user experiences across our web applications. Working closely with product teams and other stakeholders, you will use modern React.js libraries, frameworks, and development patterns to build responsive, accessible, and maintainable interfaces. Your contributions will range from architecting scalable front-end solutions to integrating APIs, optimizing performance, and guiding a small team to produce high-quality, user-focused software.

Minimum Requirements
- 5+ years of professional front-end development experience
- Proficiency in ES6, TypeScript, React.js, Redux, Node.js, HTML, CSS
- Strong knowledge of modular, maintainable, and scalable front-end architectures
- Experience with front-end performance optimization
- Familiarity with micro frontends and modern JavaScript design patterns
- Hands-on experience integrating RESTful services
- Familiarity with PostgreSQL and Elasticsearch

Responsibilities
- Architect, develop, and maintain scalable front-end solutions using React.js and related technologies
- Guide and mentor team members on technical best practices
- Ensure usability, accessibility, and performance standards are met
- Strategize and build reusable code libraries, tools, and frameworks
- Integrate and optimize third-party APIs (e.g., authentication, LMS)
- Estimate, plan, and deliver features on schedule
- Collaborate with product teams and stakeholders to align on requirements and drive solutions

Nice To Have
- Experience with Progressive Web Apps (PWAs)
- Experience with Ruby on Rails

Who We Are: At Pearson, our purpose is simple: to help people realize the life they imagine through learning. We believe that every learning opportunity is a chance for a personal breakthrough. We are the world's lifelong learning company. For us, learning isn't just what we do. It's who we are. To learn more: We are Pearson.

Pearson is an Equal Opportunity Employer and a member of E-Verify. Employment decisions are based on qualifications, merit and business need. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, sexual orientation, gender identity, gender expression, age, national origin, protected veteran status, disability status or any other group protected by law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act. If you are an individual with a disability and are unable or limited in your ability to use or access our career site as a result of your disability, you may request reasonable accommodations by emailing TalentExperienceGlobalTeam@grp.pearson.com.

Job: Software Development
Job Family: TECHNOLOGY
Organization: Enterprise Learning & Skills
Schedule: FULL_TIME
Workplace Type: On-site
Req ID: 17935
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities: As a Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements (a brief sketch follows this posting)
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results

Preferred Education: Master's Degree

Required Technical And Professional Expertise
- Bachelor's degree in Computer Science, Supply Chain, Information Systems, or a related field
- Minimum of 5-7 years of experience in Master Data Management or a related field
- 3-5 years of SAP/ERP experience, with strong exposure to at least two of the functional areas described above
- Proven experience in leading an MDM team
- Strong knowledge of data governance principles, best practices, and technologies
- Experience with data profiling, cleansing, and enrichment tools
- Ability to work with cross-functional teams to understand and address their master data needs
- Proven ability to build predictive analytics tools using PowerBI, Spotfire, or similar

Preferred Technical And Professional Experience
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
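For the enterprise-search responsibility mentioned above, a minimal sketch using the official Elasticsearch Python client might look like the following. The host, index name, and document fields are illustrative assumptions.

```python
# Minimal sketch of an enterprise-search task: indexing and querying
# documents with the Elasticsearch Python client (8.x API).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# Index a document (the index is created on first write with default mappings).
es.index(index="incidents", id="1",
         document={"title": "checkout latency spike", "severity": "high"})

# Full-text query against the title field.
resp = es.search(index="incidents",
                 query={"match": {"title": "latency"}})
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["title"])
```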
Posted 3 weeks ago
15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Join our Team

About this opportunity: The Head of Automation owns and leads automation strategy and execution, providing leadership and vision to the organization, and collaborating closely with the other Heads of Department and individual contributors to ensure E2E management and successful delivery.

What you will do:
- Drive operational efficiency and productivity through quality automation models, aligning with SDE targets and boosting automation saturation in MS Networks.
- Focus on stable automation performance with reduced outages and stronger operational outcomes.
- Collaborate with SDAP for streamlined automation monitoring, issue tracking, and reporting.
- Align automation initiatives with BCSS MS Networks' AI/ML strategy.
- Enhance communication on automation benefits and their business impact.
- Manage O&M, lifecycle, and performance of SL Operate tools, ensuring clear automation SLAs and effective tracking.
- Contribute to service architecture strategies (EOE, AAA) to maximize automation value and roadmap alignment.
- Institutionalize best practices and automate internal team processes to reduce manual effort.

The skills you bring:
- 15+ years of experience in a managed services environment, with a minimum of 8 years in Managed Services operations.
- University degree in Engineering, Mathematics, or Business Administration; an MBA is a plus.
- Strong grasp of managed services delivery and Ericsson SD processes.
- Deep understanding of operator business needs and service delivery models.
- Skilled in Ericsson process measurement tools and SL Operate/SDAP MSDP environments (eTiger, ACE, etc.).
- Technically proficient in OOAD, design patterns, and development on Unix/Linux/Windows with Java, JS, databases, shell scripts, and monitoring tools like Nagios.
- Familiar with software component interactions and DevOps practices.
- Hands-on with automation tools (Enable, Blue Prism, MATE) and monitoring platforms (Grafana, Zabbix, Elasticsearch, Graylog).
- Strong experience with web/proxy/app servers (Tomcat, Nginx, Apache).

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click here to find out all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Noida
Req ID: 770844
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description and Requirements

Hybrid

"At BMC trust is not just a word - it's a way of life!"

We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud!

We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation! The IZOT product line includes BMC's Intelligent Z Optimization & Transformation products, which help the world's largest companies to monitor and manage their mainframe systems. The modernization of the mainframe is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, mainframe integration, the speed of application development, code quality, and application security, while reducing operational costs and risks. We acquired several companies along the way, and we continue to grow, innovate, and perfect our solutions on an ongoing basis.

BMC is looking for a talented DevOps Engineer to join our family - someone who is just as passionate about solving issues with distributed systems as they are about automating, coding, and collaborating to tackle problems. Here is how, through this exciting role, YOU will contribute to BMC's and your own success:

As a DevOps engineer, you will bring experience with Linux systems, networking, monitoring and automation, containerization, and cloud technologies, along with a proven interest in using software engineering to solve operational problems. You will:
- Write software to automate API-driven tasks at scale; NodeJS and Python preferred (see the sketch at the end of this posting).
- Participate in SRE software engineering, writing code for the continuing reduction of human intervention in operational tasks and automation of processes.
- Manage Cloud provider infrastructure, system deployments, and product release operations.
- Monitor the application ecosystem, responding to incidents, correcting and improving systems to prevent incidents, planning capacity, and owning the resolution of Elasticsearch-related customer issues.
- Participate in 24x365 on-call schedules.

As every BMC employee, you will be given the opportunity to learn, be included in global projects, challenge yourself, and be the innovator when it comes to solving everyday problems.

To ensure you're set up for success, you will bring the following skillset and experience:
- You can embrace, live, and breathe our BMC values every day!
- 3-5 years of hands-on experience in a DevOps, SRE, or Infrastructure role.
- Thorough understanding of logging and monitoring tools: ELK Stack, Prometheus, Grafana, etc.
- Solid understanding of Linux system administration and basic networking.
- Experience with at least one scripting language (Python, Bash, etc.).
- Familiarity with DevOps tools such as Git, Jenkins, Docker, and CI/CD pipelines.
- Experience using a public cloud: AWS, GCP, Azure, or OpenStack; AWS is preferred.
- Experience working remotely with a fully distributed team, with the communication and adaptability it requires.
- Experience mentoring and helping people grow their abilities to use and contribute to the tooling you help build, and experience building public-cloud-agnostic software.

CA-DNP

Our commitment to you! BMC's culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won't be known just by your employee number, but for your true authentic self. BMC lets you be YOU!

If after reading the above you're unsure whether you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talents from diverse backgrounds and experience to ensure we face the world together with the best ideas!

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.

BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 2,117,800 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to know more and apply.
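As a small illustration of the "automate API-driven tasks" bullet above, here is a hedged Python sketch of a health-check monitor. The endpoint URL, thresholds, and escalation step are hypothetical placeholders.

```python
# Minimal sketch of software automating an API-driven operational task:
# polling a service health endpoint and escalating when checks keep failing.
import time
import requests

HEALTH_URL = "https://example.internal/api/health"  # placeholder endpoint

def check_once(timeout_s: float = 5.0) -> bool:
    try:
        resp = requests.get(HEALTH_URL, timeout=timeout_s)
        return resp.status_code == 200
    except requests.RequestException:
        return False

def monitor(max_failures: int = 3, interval_s: float = 30.0) -> None:
    failures = 0
    while True:
        if check_once():
            failures = 0
        else:
            failures += 1
            if failures >= max_failures:
                # A real system would page the on-call rotation via an
                # incident-management API instead of printing.
                print("ALERT: health check failing, escalating")
                failures = 0
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor()
```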
Posted 3 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who Are We? Postman is the world's leading API platform, used by more than 40 million developers and 500,000 organizations, including 98% of the Fortune 500. Postman is helping developers and professionals across the globe build the API-first world by simplifying each step of the API lifecycle and streamlining collaboration, enabling users to create better APIs, faster. The company is headquartered in San Francisco and has an office in Bangalore, where it was founded. Postman is privately held, with funding from Battery Ventures, BOND, Coatue, CRV, Insight Partners, and Nexus Venture Partners. Learn more at postman.com or connect with Postman on X via @getpostman. P.S.: We highly recommend reading the "API-First World" graphic novel to understand the bigger picture and our vision at Postman.

About Us: The Search Team at Postman is responsible for enabling users to quickly find and get started with the APIs they are looking for. Postman is growing at a rapid pace, and this manifests in an ever-increasing volume of data that users create and consume, within their teams and in the Public API Network. We focus on improving discovery and ease of consumption over this data.

The Role: We are looking for a Senior Engineer with 6+ years of experience, deep backend expertise in search and ETL systems, and a strong product mindset to lead core initiatives on our search platform. In this role, you'll work at the intersection of infrastructure, relevance, and developer experience, designing systems that power search across the platform. You'll bring a bias for action, a strong backend foundation, and the curiosity to explore beyond traditional boundaries, including areas like high-performance web services, high-volume data pipelines, machine learning, and relevance tuning.

What You'll Do
- Own the end-to-end architecture and roadmap of the search platform, consisting of distributed indexing pipelines, storage infrastructure, and high-performance web servers (a small illustrative sketch appears at the end of this posting).
- Contribute to improving the relevance of search results through signal engineering and better data models.
- Keep the system performant and reliable as data volume and query traffic grow, while unblocking business requirements and managing risk.
- Collaborate with cross-functional stakeholders like Product Managers and Designers, as well as other teams, to drive initiatives.
- Uphold operational excellence in the team, showing a bias for action and user empathy.
- Quickly build functional prototypes to solve internal and external use cases.
- Scale the technical abilities of engineers in the team and uphold quality through code reviews and mentorship.

About You
- You have 6+ years of experience building applications in high-level programming languages (JavaScript, Python, Java, etc.). We code primarily in Python and JavaScript.
- You have worked on customer-facing search solutions and have hands-on experience with search systems like Elasticsearch, Apache Solr, or OpenSearch.
- You have experience building systems to orchestrate batch and streaming pipelines using Apache Kafka, Kinesis, or Lambdas.
- You have knowledge of compute/data processing engines like Apache Spark, Apache Flink, or Ray.
- You have displayed strength in AI, Natural Language Processing, ranking systems, or recommendation systems.
- You possess strong Computer Science fundamentals (algorithms, networking, and operating systems) and are familiar with various programming tools, frameworks, and development practices.
- You write testable, maintainable code following SOLID principles that's easy to understand.
- You are an excellent communicator who can articulate technical concepts to product managers, designers, support, and other engineers.

What Else? In addition to Postman's pay-on-performance philosophy, and a flexible schedule working with a fun, collaborative team, Postman offers a comprehensive set of benefits, including full medical coverage, flexible PTO, wellness reimbursement, and a monthly lunch stipend. Along with that, our wellness programs will help you stay in the best of your physical and mental health. Our frequent and fascinating team-building events will keep you connected, while our donation-matching program can support the causes you care about. We're building a long-term company with an inclusive culture where everyone can be the best version of themselves.

At Postman, we embrace a hybrid work model. For all roles based out of the San Francisco Bay Area, Boston, Bangalore, Hyderabad, and New York, employees are expected to come into the office 3 days a week. We were thoughtful in our approach, which is based on balancing flexibility and collaboration and grounded in feedback from our workforce, leadership team, and peers. The benefits of our hybrid office model are shared knowledge, brainstorming sessions, communication, and in-person trust-building that cannot be replicated via Zoom.

Our Values: At Postman, we create with the same curiosity that we see in our users. We value transparency and honest communication about not only successes, but also failures. In our work, we focus on specific goals that add up to a larger vision. Our inclusive work culture ensures that everyone is valued equally as important pieces of our final product. We are dedicated to delivering the best products we can.

Equal Opportunity: Postman is an Equal Employment Opportunity and Affirmative Action Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. Postman does not accept unsolicited headhunter and agency resumes. Postman will not pay fees to any third-party agency or company that does not have a signed agreement with Postman.
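To ground the indexing-pipeline work described above, here is a minimal sketch of one pipeline stage: consuming document events from Kafka and bulk-indexing them into Elasticsearch. The topic, index, broker address, and batch size are illustrative assumptions, not Postman's actual design.

```python
# Minimal sketch of one stage of a distributed indexing pipeline: consume
# document events from Kafka and bulk-index them into Elasticsearch.
import json
from elasticsearch import Elasticsearch, helpers
from kafka import KafkaConsumer  # kafka-python

es = Elasticsearch("http://localhost:9200")
consumer = KafkaConsumer(
    "api-docs",                          # hypothetical topic of document events
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    enable_auto_commit=False,
)

def to_action(doc: dict) -> dict:
    # Map an event to a bulk action; a stable _id keeps indexing idempotent.
    return {"_index": "search-docs", "_id": doc["id"], "_source": doc}

batch = []
for message in consumer:
    batch.append(to_action(message.value))
    if len(batch) >= 500:
        helpers.bulk(es, batch)  # bulk-index the accumulated batch
        consumer.commit()        # commit offsets only after a successful write
        batch.clear()
```

Committing offsets only after the bulk write succeeds gives at-least-once delivery; the stable document IDs make the occasional replay harmless.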
Posted 3 weeks ago
25.0 years
9 Lacs
Chennai
On-site
The Company: PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities.

Job Summary: In this job, you will lead the design and implementation of complex data systems and architectures. You will work with stakeholders to understand requirements and deliver solutions. Your role involves driving best practices in data engineering, ensuring data quality, and mentoring junior engineers.

Job Description

Essential Responsibilities:
- Lead the design and development of complex data pipelines for data collection and processing (a brief illustrative sketch appears at the end of this posting).
- Develop and maintain advanced data storage solutions.
- Ensure data quality and consistency through sophisticated validation and cleansing processes.
- Implement advanced data transformation techniques to prepare data for analysis.
- Collaborate with cross-functional teams to understand data requirements and provide innovative solutions.
- Optimize data engineering processes for performance, scalability, and reliability.

Minimum Qualifications: Minimum of 5 years of relevant work experience and a Bachelor's degree or equivalent experience.

Preferred Qualification: We are the Data Foundational Services (DFS) team within the Data Analytics and Intelligence Solutions (DAIS) organization. Our mission is to integrate data from PayPal and its brands into a unified data platform, enabling seamless data access for operational and analytical applications.
We support critical business use cases, ensuring high-quality, scalable, and reliable data solutions across the enterprise.

Minimum Qualifications: 8-10 years of relevant work experience and a Bachelor's degree or equivalent experience.

Required Skills:
- Software Development Expertise: Strong proficiency in back-end (Java or Python) technologies, including building and maintaining scalable services. Front-end (React, Angular, JavaScript, HTML, CSS) experience is an advantage.
- Data Engineering & Cloud Technologies: Experience working with databases (Oracle, MySQL, PostgreSQL), Big Data technologies (Hadoop, Spark, Kafka, Elasticsearch/Solr), and cloud platforms (AWS, GCP, Azure).
- Scalability, Performance & Security: Deep understanding of designing systems for high availability, performance, and security, preferably in regulated industries like financial services.

Subsidiary: PayPal
Travel Percent: 0

PayPal does not charge candidates any fees for courses, applications, resume reviews, interviews, background checks, or onboarding. Any such request is a red flag and likely part of a scam. To learn more about how to identify and avoid recruitment fraud, please visit https://careers.pypl.com/contact-us.

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits: At PayPal, we're committed to building an equitable and inclusive global economy. And we can't do this without our most important asset: you. That's why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee share options, health and life insurance and more. To learn more about our benefits, please visit https://www.paypalbenefits.com.

Who We Are: Commitment to Diversity and Inclusion. PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com.

Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal.

For general requests for consideration of your skills, please join our Talent Community.

We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates.
Please don’t hesitate to apply.
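For a flavor of the data-pipeline work this role describes, here is a minimal PySpark sketch of a validate-and-transform batch job. The input path, schema, and rules are illustrative assumptions only, not PayPal's actual pipeline.

```python
# Minimal sketch of a batch data pipeline step: read raw events, apply
# data-quality gates and transformations, and write a curated dataset.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("payments-etl-sketch").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/payments/")  # placeholder path

cleaned = (
    raw
    # Data-quality gate: drop records missing required fields.
    .dropna(subset=["txn_id", "amount", "currency"])
    # Normalize currency codes and flag large transactions for analysis.
    .withColumn("currency", F.upper(F.col("currency")))
    .withColumn("is_large", F.col("amount") > 10_000)
    .dropDuplicates(["txn_id"])
)

cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/payments/")
```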
Posted 3 weeks ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Key Responsibilities
- Design, develop, and test robust and scalable backend systems using Python.
- Develop and integrate RESTful APIs and third-party services.
- Write clean, maintainable, and well-documented code.
- Collaborate with front-end developers, designers, and product managers to implement features.
- Optimize application performance and resolve bugs and issues.
- Perform unit and integration testing to ensure software quality (see the sketch after this posting).
- Participate in code reviews and mentor junior developers where needed.
- Maintain and enhance existing applications.

Requirements:
- 5+ years of professional experience in Python development.
- Strong understanding of core Python concepts and OOP.
- Experience with at least one web framework such as Django, Flask, or FastAPI.
- Familiarity with RESTful API development and integration.
- Proficiency in working with relational and NoSQL databases (e.g., MySQL, MongoDB) and ORMs (e.g., SQLAlchemy, Django ORM).
- Good knowledge of unit testing frameworks like unittest and pytest.
- Experience with version control systems like Git.
- Experience with microservices architecture.
- Understanding of application security best practices (e.g., SQL injection prevention, secure API development).
- Exposure to message brokers like RabbitMQ, Kafka, or Celery.
- Experience with the ELK stack (Elasticsearch, Logstash, Kibana) for logging and monitoring.
- Basic knowledge of Docker and containerization concepts.
- Understanding of CI/CD pipelines and tools like Jenkins, GitLab CI, or GitHub Actions.

Preferred Skills
- Basic understanding of software design patterns (e.g., Singleton, Factory, Strategy).
- Exposure to AI and automation tools such as Flowise and n8n.
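Tying together two of the requirements above (FastAPI and pytest), here is a minimal sketch of an endpoint with unit tests. All names are illustrative.

```python
# Minimal sketch: a FastAPI endpoint plus pytest unit tests using FastAPI's
# built-in TestClient.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

@app.get("/items/{item_id}")
def read_item(item_id: int):
    return {"item_id": item_id}

client = TestClient(app)

def test_health():
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}

def test_read_item_validates_type():
    # FastAPI rejects a non-integer path parameter with a 422 response.
    assert client.get("/items/not-a-number").status_code == 422
```

Running `pytest` against this file exercises the endpoints in-process, with no server or network needed.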
Posted 3 weeks ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Data Engineer
Experience: 2-4 years
Salary: Competitive
Preferred Notice Period: 30 days
Shift: 9:00 AM to 6:00 PM IST
Opportunity Type: Office (Gurugram)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills required: Python, Airflow, and Elasticsearch

Trademo (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

What will you be doing here?
- Be responsible for the maintenance and growth of a 50TB+ data pipeline serving global SaaS products for businesses, including onboarding new data and collaborating with pre-sales to articulate technical solutions.
- Solve complex problems across large datasets by applying algorithms, particularly within the domains of Natural Language Processing (NLP) and Large Language Models (LLMs).
- Leverage bleeding-edge technology to work with large volumes of complex data.
- Be hands-on in development with Python, Pandas, NumPy, and ETL frameworks, with preferred exposure to distributed computing frameworks like Apache Spark, Apache Kafka, Apache Airflow, and Elasticsearch (a small Airflow sketch follows this posting).
- Along with individual data engineering contributions, actively help peers and junior team members with architecture and code to ensure the development of scalable, accurate, and highly available solutions.
- Collaborate with teams, share knowledge via tech talks, and promote tech and engineering best practices within the team.

Requirements:
- B.Tech/M.Tech in Computer Science from IIT or equivalent Tier 1 colleges.
- 3+ years of relevant work experience in data engineering or related roles.
- Proven ability to efficiently work with a high variety and volume of data (50TB+ pipeline experience is a plus).
- Solid understanding of, and preferred exposure to, NoSQL databases, including Elasticsearch, MongoDB, and GraphDB.
- Basic understanding of working within Cloud infrastructure and cloud-native apps (AWS, Azure, IBM, etc.).
- Exposure to core data engineering concepts and tools: data warehousing, ETL processes, SQL, and NoSQL databases.
- Great problem-solving ability over large sets of data and the ability to apply algorithms; experience using NLP and LLMs is a plus.
- Willingness to learn and apply new techniques and technologies to extract intelligence from data; prior exposure to Machine Learning and NLP is a significant advantage.
- Sound understanding of algorithms and data structures.
- Ability to write well-crafted, readable, testable, maintainable, and modular code.

Desired Profile:
- A hard-working, humble disposition.
- Desire to make a strong impact on the lives of millions through your work.
- Capacity to communicate well with stakeholders as well as team members, and to be an effective interface between the Engineering and Product/Business teams.
- A quick thinker who can adapt to a fast-paced startup environment and work with minimum supervision.

What we offer: At Trademo, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave (maternity and paternity)
- Health insurance
- Flexible time off
- Stock options

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: Trademo is a Global Supply Chain Intelligence SaaS company, headquartered in Palo Alto, US. Trademo collects public and private data on global trade transactions, sanctioned parties, trade tariffs, ESG, and other events using its proprietary algorithms.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal apart from this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
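Since the must-have skills are Python, Airflow, and Elasticsearch, here is a minimal sketch of how they fit together: an Airflow 2.x TaskFlow DAG that extracts records, transforms them with pandas, and indexes them into Elasticsearch. Connection details, the index name, and the record fields are illustrative assumptions.

```python
# Minimal sketch of a daily extract-transform-load DAG using the Airflow 2.x
# TaskFlow API. All data, hosts, and field names are placeholders.
import pendulum
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=pendulum.datetime(2024, 1, 1), catchup=False)
def trade_records_etl():

    @task
    def extract() -> list[dict]:
        # Placeholder for pulling a day's worth of raw trade records.
        return [{"id": 1, "hs_code": "850440", "value_usd": "1,200"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Imports inside tasks keep the DAG file light to parse.
        import pandas as pd
        df = pd.DataFrame(rows)
        df["value_usd"] = df["value_usd"].str.replace(",", "").astype(float)
        return df.to_dict(orient="records")

    @task
    def load(rows: list[dict]) -> None:
        from elasticsearch import Elasticsearch, helpers
        es = Elasticsearch("http://localhost:9200")
        helpers.bulk(es, ({"_index": "trades", "_id": r["id"], "_source": r}
                          for r in rows))

    load(transform(extract()))

trade_records_etl()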
Posted 3 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Senior Software DevOps Engineer, you will lead the design, implementation, and evolution of telemetry pipelines and DevOps automation that enable next-generation observability for distributed systems. You will blend a deep understanding of OpenTelemetry architecture with strong DevOps practices to build a reliable, high-performance, self-service observability platform across hybrid cloud environments (AWS & Azure). Your mission: empower engineering teams with actionable insights through rich metrics, logs, and traces, while championing automation and innovation at every layer.

WHAT YOU WILL BE DOING

Observability Strategy & Implementation: Architect and manage scalable observability solutions using OpenTelemetry (OTel), encompassing:
- Collectors: Design and deploy OTel Collectors (agent/gateway modes) for ingesting and exporting telemetry across services.
- Instrumentation: Guide teams on auto/manual instrumentation of services for metrics, traces, and logs (a minimal Python example follows this posting).
- Export Pipelines: Build telemetry pipelines to route data to backends like Grafana, Prometheus, Loki, New Relic, and Azure Monitor.
- Processors & Extensions: Leverage OTel processors (batching, filtering, resource detection) and extensions for advanced enrichment and routing.

DevOps Automation & Platform Reliability: Own the CI/CD experience using GitLab Pipelines, integrating infrastructure automation with Terraform, Docker, and scripting in Bash and Python. Build resilient and reusable infrastructure-as-code modules across the AWS and Azure ecosystems. Manage containerized workloads, registries, and secrets, with secure cloud-native deployments following best practices.

Cloud-Native Enablement: Develop observability blueprints for cloud-native apps across AWS (ECS, EC2, VPC, IAM, CloudWatch) and Azure (AKS, App Services, Monitor). Optimize cost and performance of telemetry pipelines while ensuring SLA/SLO adherence for observability services.

Monitoring, Dashboards, and Alerting: Build and maintain intuitive, role-based dashboards in Grafana, New Relic, and similar tools, enabling real-time visibility into service health, business KPIs, and SLOs. Implement alerting best practices (noise reduction, deduplication, alert grouping) integrated with incident management systems.

Innovation & Technical Leadership: Drive cross-team observability initiatives that reduce MTTR and elevate engineering velocity. Champion innovation projects, including self-service observability onboarding, log/metric reduction strategies, AI-assisted root cause detection, and more. Mentor engineering teams on instrumentation, telemetry standards, and operational excellence.

WHAT YOU BRING
- 6+ years of experience in DevOps, Site Reliability Engineering, or Observability roles.
- Deep expertise with OpenTelemetry, including Collector configurations, receivers/exporters (OTLP, HTTP, Prometheus, Loki), and semantic conventions.
- Proficiency in GitLab CI/CD, Terraform, Docker, and scripting (Python, Bash, Go).
- Strong hands-on experience with AWS and Azure services, cloud automation, and cost optimization.
- Proficiency with observability backends: Grafana, New Relic, Prometheus, Loki, or equivalent APM/log platforms.
- Passion for building automated, resilient, and scalable telemetry pipelines.
- Excellent documentation and communication skills to drive adoption and influence engineering culture.

Nice to Have
- Certifications in AWS, Azure, or Terraform.
- Experience with OpenTelemetry SDKs in Go, Java, or Node.js.
- Familiarity with SLO management, error budgets, and observability-as-code approaches.
- Exposure to event streaming (Kafka, RabbitMQ), Elasticsearch, Vault, and Consul.
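As a small example of the instrumentation guidance referenced above, here is a minimal sketch of manual OpenTelemetry tracing in Python, exporting spans to a Collector over OTLP. The service name and endpoint are illustrative defaults.

```python
# Minimal sketch of manual OTel instrumentation: spans are batched and
# shipped to a Collector at the default OTLP/gRPC endpoint.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def handle_request(order_id: str) -> None:
    # Each unit of work becomes a span; attributes enrich it for querying.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_card"):
            pass  # business logic would go here

handle_request("ord-42")
```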
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities: As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results (a small ETL sketch follows this posting)

Preferred Education: Master's Degree

Required Technical And Professional Expertise
- Expertise in data warehousing/information management/data integration/business intelligence using the ETL tool Informatica PowerCenter
- Knowledge of Cloud, Power BI, and data migration on cloud
- Experience in Unix shell scripting and Python
- Experience with relational SQL, Big Data, etc.

Preferred Technical And Professional Experience
- Knowledge of MS Azure Cloud
- Experience in Informatica PowerCenter
- Experience in Unix shell scripting and Python
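To illustrate the extract-cleanse-load concepts this role touches (here in plain Python with sqlite3 rather than Informatica PowerCenter, purely for illustration), a minimal sketch might look like this. The file, table, and column names are assumptions.

```python
# Minimal sketch of an ETL step: extract a CSV, cleanse records, and load
# them into a relational table.
import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute("""CREATE TABLE IF NOT EXISTS customers (
    id INTEGER PRIMARY KEY, name TEXT NOT NULL, country TEXT NOT NULL)""")

with open("customers.csv", newline="") as f:
    rows = []
    for rec in csv.DictReader(f):
        # Cleansing step: skip records with a missing name, normalize country.
        if not rec["name"].strip():
            continue
        rows.append((int(rec["id"]), rec["name"].strip(),
                     rec["country"].strip().upper()))

# Idempotent load: INSERT OR REPLACE keyed on the primary key.
conn.executemany("INSERT OR REPLACE INTO customers VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
```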
Posted 3 weeks ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Zeta: Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform, Zeta Tachyon, is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform globally. Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700 employees, with over 70% of roles in R&D, across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from Softbank, Mastercard, and other investors in 2021. Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter.

The Role: As part of the Risk & Compliance team within the Engineering division at Zeta, the Application Security Manager is tasked with safeguarding all mobile applications, web applications, and APIs. This involves identifying vulnerabilities through testing and ethical hacking, while also educating developers and DevOps teams on how to resolve them. Your primary goal will be to ensure the security of Zeta's applications and platforms. As a manager, you'll be responsible for securing all of Zeta's products. In this individual contributor role, you will report directly to the Chief Information Security Officer (CISO).

The role involves ensuring the security of web and mobile applications, APIs, and infrastructure by conducting regular VAPT. It requires providing expert guidance to developers on how to address and fix security vulnerabilities, along with performing code reviews to identify potential security issues. The role also includes actively participating in application design discussions to ensure security is integrated from the beginning, and leading threat modeling exercises to identify potential threats. Additionally, the profile focuses on developing and promoting secure coding practices (a small illustrative example follows the skills list below), educating developers and QA engineers on security standards for secure coding, data handling, network security, and encryption. The role also entails evaluating and integrating security testing tools like SAST, DAST, and SCA into the CI/CD pipeline to enhance continuous security integration.

Responsibilities
- Guide Security and Privacy Initiatives: Actively participate in design reviews and threat modeling sessions to help shape the security and privacy approach for technology projects, ensuring security is embedded at all stages of application development.
- Ensure Secure Application Development: Collaborate with developers and product managers to ensure that applications are securely developed, hardened, and aligned with industry best practices.
- Project Scope Management: Define the scope for security initiatives, ensuring continuous adherence throughout each project phase, from initiation to sustenance/maintenance.
- Drive Internal Adoption and Visibility: Ensure that security projects are well understood and adopted by internal stakeholders, fostering a culture of security awareness within the organization.
- Security Engineering Expertise: Serve as a technical expert and security champion within Zeta, providing guidance and expertise on security best practices across the organization.
- Team Leadership and Development: Make decisions on hiring and lead the hiring process to build a skilled security team. Define and drive improvements in the hiring process to attract top security talent. Mentor and guide developers and QA teams on secure coding practices and security awareness.
- Security Tool and Gap Assessment: Continuously assess and recommend tools to address gaps in application security, ensuring the team is equipped with the best resources to identify and address vulnerabilities.
- Stakeholder Liaison: Collaborate with both internal and external stakeholders to ensure alignment on security requirements and deliverables, acting as the main point of contact for all security-related matters within the team.
- Bug Bounty Program Management: Evaluate and triage security bugs reported through the Bug Bounty program, working with relevant teams to address and resolve issues effectively.
- Own Security Posture: Take ownership of the security posture of applications across the business units, ensuring that security best practices are consistently applied and maintained.

Skills
- Hands-on experience in Vulnerability Assessment (VA) and Penetration Testing (PT) across web, mobile, API, and network/infrastructure environments.
- Deep understanding of the OWASP Top 10 and the respective attack and defense mechanisms.
- Strong exposure to Secure SDLC activities, threat modeling, and secure coding practices.
- Experience with both commercial and open-source security tools, including Burp Suite, AppScan, OWASP ZAP, BeEF, Metasploit, Qualys, Nipper, Nessus, and Snyk.
- Expertise in identifying and exploiting business logic vulnerabilities.
- Solid understanding of cryptography, PKI-based systems, and TLS protocols.
- Proficiency in various AuthN/AuthZ frameworks (OIDC, OAuth, SAML) and the ability to read, write, and understand Java code.
- Experience with static analysis and code reviews using tools like Snyk, Fortify, Veracode, Checkmarx, and SonarQube.
- Hands-on experience in reverse engineering mobile apps and using tools like dex2jar, ADB, Drozer, Clang, iMAS, and Frida/Objection for dynamic instrumentation.
- Experience conducting penetration tests and security assessments on internal/external networks, Windows/Linux environments, and cloud infrastructure (primarily AWS).
- Ability to identify and exploit security vulnerabilities and misconfigurations in Windows and Linux servers.
- Proficiency in shell scripting and automating tasks with tools such as Python or Ruby.
- Familiarity with PA-DSS, PCI SSF (S3, SSLC), and other security standards such as PCI DSS, DPSC, ASVS, and NIST.
- Understanding of Java frameworks like Spring Boot, CI/CD processes, and tools like Jenkins and Bitrise.
- In-depth knowledge of cloud infrastructure (AWS, Azure), including VPC/VNet, S3 buckets, IAM, Security Groups, blob stores, Load Balancers, Docker containers, and Kubernetes.
- Solid understanding of agile development practices.
- Active participation in bug bounty programs (HackerOne, Bugcrowd, etc.) and experience with hackathons and Capture the Flag (CTF) competitions.
- Knowledge of AWS/Azure services, including network configuration and security management.
- Experience with databases (PostgreSQL, Redshift, MySQL) and other data storage solutions such as Elasticsearch and S3 buckets.
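As a small example of the secure-coding guidance this role promotes, here is a minimal sketch of the classic SQL injection fix using parameterized queries. The table and data are illustrative.

```python
# Minimal sketch of SQL injection prevention: parameterized queries keep
# user input strictly as data, never as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a typical injection payload

# VULNERABLE pattern (shown commented out): input interpolated into SQL.
# rows = conn.execute(f"SELECT role FROM users WHERE name = '{user_input}'")

# SAFE: the driver binds the value; the payload matches no user.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] - the injection attempt returns nothing
```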
Preferred Certifications: OSCP, OSWE, GWAPT, AWAE, AWS Certified Security Specialist, CompTIA Security+

Experience And Qualifications
- 12 to 18 years of overall experience in application security, with a strong background in identifying and mitigating vulnerabilities in software applications. A background in development and experience in the fintech sector is a plus.
- Bachelor of Technology (BE/B.Tech), M.Tech, or ME in Computer Science or an equivalent degree from an engineering college/university.

Life At Zeta: At Zeta, we want you to grow to be the best version of yourself by unlocking the great potential that lies within you. This is why our core philosophy is 'People Must Grow.' We recognize your aspirations, act as enablers by bringing you the right opportunities, and let you grow as you chase disruptive goals. Life at Zeta is adventurous and exhilarating at the same time. You get to work with some of the best minds in the industry and experience a culture that values the diversity of thoughts. If you want to push boundaries, learn continuously, and grow to be the best version of yourself, Zeta is the place to be!

Zeta is an equal opportunity employer. At Zeta, we are committed to equal employment opportunities regardless of job history, disability, gender identity, religion, race, marital/parental status, or other special status. We are proud to be an equitable workplace that welcomes individuals from all walks of life if they fit the roles and responsibilities.
Posted 3 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineer Experience: 2 - 4 Years Exp Salary: Competitive Preferred Notice Period : 30 Days Shift : 9:00 AM to 6:00 PM IST Opportunity Type: Office (Gurugram) Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Clients) Must have skills required : Python and Airflow and Elasticsearch Trademo (One of Uplers' Clients) is Looking for: Data Engineer who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player, with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description What will you be doing here? Responsible for the maintenance and growth of a 50TB+ data pipeline serving global SaaS products for businesses, including onboarding new data and collaborating with pre-sales to articulate technical solutions Solves complex problems across large datasets by applying algorithms, particularly within the domains of Natural Language Processing (NLP) and Large Language Models (LLM) Leverage bleeding-edge technology to work with large volumes of complex data Be hands-on in development - Python, Pandas, NumPy, ETL frameworks. Preferred exposure to distributed computing frameworks like Apache Spark , Apache Kafka, Apache Airflow, Elasticsearch Along with individual data engineering contributions, actively help peers and junior team members on architecture and code to ensure the development of scalable, accurate, and highly available solutions Collaborate with teams and share knowledge via tech talks and promote tech and engineering best practices within the team. Requirements: B-Tech/M-Tech in Computer Science from IIT or equivalent Tier 1 Colleges. 3+ years of relevant work experience in data engineering or related roles. Proven ability to efficiently work with a high variety and volume of data (50TB+ pipeline experience is a plus). Solid understanding and preferred exposure to NoSQL databases, including Elasticsearch, MongoDB, and GraphDB. Basic understanding of working within Cloud infrastructure and Cloud Native Apps (AWS, Azure, IBM , etc.). Exposure to core data engineering concepts and tools: Data warehousing, ETL processes, SQL, and NoSQL databases. Great problem-solving ability over a larger set of data and the ability to apply algorithms, with a plus for experience using NLP and LLM. Willingness to learn and apply new techniques and technologies to extract intelligence from data, with prior exposure to Machine Learning and NLP being a significant advantage. Sound understanding of Algorithms and Data Structures. Ability to write well-crafted, readable, testable, maintainable, and modular code. Desired Profile: A hard-working, humble disposition. Desire to make a strong impact on the lives of millions through your work. Capacity to communicate well with stakeholders as well as team members and be an effective interface between the Engineering and Product/Business team. A quick thinker who can adapt to a fast-paced startup environment and work with minimum supervision What we offer: At Trademo, we want our employees to be comfortable with their benefits so they focus on doing the work they love. Parental leave - Maternity and Paternity Health Insurance Flexible Time Offs Stock Options How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! 
About Our Client: Trademo is a global supply chain intelligence SaaS company headquartered in Palo Alto, US. Trademo collects public and private data on global trade transactions, sanctioned parties, trade tariffs, ESG, and other events using its proprietary algorithms.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal apart from this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
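For candidates new to the stack above, here is a minimal sketch of the kind of Pandas-based ETL task an Airflow pipeline like the one described might run. It assumes Airflow 2.4+; the DAG id, file paths, and column names are illustrative placeholders, not the client's actual pipeline.

```python
# Minimal Airflow + Pandas ETL sketch (hypothetical paths and columns).
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def clean_shipments():
    # Extract raw trade records, drop incomplete rows, normalize a text column.
    df = pd.read_csv("/data/raw/shipments.csv")
    df = df.dropna(subset=["hs_code", "shipper"])
    df["shipper"] = df["shipper"].str.strip().str.upper()
    # Load the cleaned records as Parquet for downstream consumers.
    df.to_parquet("/data/clean/shipments.parquet", index=False)


with DAG(
    dag_id="shipments_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # the `schedule` argument requires Airflow 2.4+
    catchup=False,
) as dag:
    PythonOperator(task_id="clean_shipments", python_callable=clean_shipments)
```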
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As an SDE 3, you will be responsible for solving complex problems, elevating engineering and operational excellence, and leading new tech discovery and adoption. You will ensure high standards of code and design quality, mentor junior developers, and proactively manage technical risks to ensure successful project delivery.

Responsibilities:
- Solve complex, ambiguous problems at the team level.
- Raise the bar for engineering excellence.
- Raise the bar for operational excellence at the team level.
- Drive new tech discovery for the team and new tech adoption within it.
- Act as custodian of the team's code and design quality.
- Coach SDE1s and SDE2s within the team.
- Proactively identify technical risk and de-risk projects in the team.
- Bring a culture of learning and innovation to the team.
- Build platforms that improve MTTD and MTTR (mean time to detect and to recover).
- Create solutions to a vision statement.
- Analyze and improve system performance.
- Guide the team on coding patterns, languages, and frameworks.

Requirements:
- B.Tech/M.Tech in CSE from a Tier 1 college.
- Strong computer science fundamentals: object-oriented programming, design patterns, data structures, and algorithm design.
- Proficiency with the Java stack (Java, Java design patterns).
- Experience building scalable microservices and distributed systems.
- 5+ years of experience contributing to architecture and design in a product setup.
- Total work experience of 7 to 10 years contributing to architecture and design in a product setup.
- Technology/tools: Spring, Hibernate, RabbitMQ, Kafka, ZooKeeper, Elasticsearch; REST APIs.
- Databases: Cassandra, MongoDB, Redis, MS-SQL, MySQL.
- Hands-on experience working at large scale.
- Hands-on experience in low- and high-level design (LLD + HLD).
- Proficiency in multiple programming languages and technology stacks.
- Expertise in high-level design.
- Expertise in the CI/CD capabilities required to improve efficiency.
Posted 3 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Software Full Stack Developer

As a Fullstack SDE1 at NxtWave, you:
- Get first-hand experience of building applications and seeing them released quickly to NxtWave learners (within weeks).
- Take ownership of the features you build and work closely with the product team.
- Work in a great culture that continuously empowers you to grow in your career.
- Enjoy the freedom to experiment and learn from mistakes (fail fast, learn faster).
- Get first-hand experience in scaling the features you build as the company grows rapidly; NxtWave is one of the fastest-growing edtech startups.
- Build in a world-class developer environment by applying clean coding principles, sound code architecture, and similar practices.

Responsibilities:
- Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
- Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas (a FastAPI sketch follows this posting).
- Build reusable, maintainable frontend components using modern state management practices.
- Develop backend services in Node.js or Python, adhering to clean-architecture principles.
- Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
- Containerize applications and configure CI/CD pipelines for automated builds and deployments.
- Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
- Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
- Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills:
- 1+ years of experience building full-stack web applications.
- Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
- Advanced experience with React (Hooks, Context, Router) or an equivalent modern UI framework.
- Hands-on experience with state management patterns (Redux, MobX, or custom solutions).
- Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
- Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
- Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
- Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
- Skill in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
- Familiarity with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
- Product-first thinking with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you:
- The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software.
- Strong collaboration abilities and a flexible, friendly approach to working with teams.
- Strong determination with a constant eye on solutions.
- Creative ideas with a problem-solving mindset.
- Openness to receiving objective criticism and improving upon it.
- Eagerness to learn and zeal to grow.
- Strong communication skills (a huge plus).

Work Location: Hyderabad

About NxtWave: NxtWave is one of India's fastest-growing ed-tech startups, revolutionizing the 21st-century job market by transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave was founded by Rahul Attuluri (ex-Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay), and Anupam Pedarla (IIT Kharagpur).
Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital. As an official partner of NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence. Some of its prestigious recognitions include:
- Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
- 'Startup Spotlight Award of the Year' by T-Hub in 2023
- 'Best Tech Skilling EdTech Startup of the Year 2022' by Times Business Awards
- 'The Greatest Brand in Education' in a research-based listing by URS Media
- Founders Anupam Pedarla and Sashank Gujjula honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education

NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by 2000+ companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte, and more.

Know more about NxtWave: https://www.ccbp.in
Read more about us in the news: Economic Times | CNBC | YourStory | VCCircle
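As a hedged illustration of the REST API work described in the responsibilities above, here is a minimal FastAPI endpoint. The `Course` model, route, and in-memory store are hypothetical stand-ins, not NxtWave's actual API.

```python
# Minimal FastAPI REST endpoint sketch (hypothetical model and data).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory store standing in for a real MySQL/PostgreSQL table.
COURSES: dict[int, dict] = {1: {"id": 1, "title": "Intro to Python"}}


class Course(BaseModel):
    id: int
    title: str


@app.get("/courses/{course_id}", response_model=Course)
def get_course(course_id: int) -> Course:
    course = COURSES.get(course_id)
    if course is None:
        raise HTTPException(status_code=404, detail="Course not found")
    return Course(**course)
```

Run locally with `uvicorn main:app --reload` and query `GET /courses/1`; the `response_model` declaration gives validated, documented output for free.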
Posted 3 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description:

Purpose of the Job
The Manager - DevOps / Release Management is responsible for automating the manual tasks involved in developing and deploying code and data, implementing continuous integration and continuous deployment frameworks. They are also responsible for maintaining high availability of production and non-production environments. This is a hands-on leadership role, requiring the ability to contribute directly to the implementation and optimization of CI/CD pipelines, infrastructure automation, and release management practices.

Job Description:
- Design, implement, and manage robust CI/CD pipelines and infrastructure automation frameworks for applications and data services.
- Oversee and execute release management processes across environments, ensuring governance, repeatability, and quality (a small post-deploy health-gate sketch follows this posting).
- Lead the development and maintenance of infrastructure-as-code and configuration management tooling (e.g., Terraform, Ansible).
- Improve the efficiency of the release management process, with a focus on quality and minimizing post-production issues.
- Proactively monitor production and non-production systems to ensure high availability, scalability, and performance.
- Identify and resolve deployment issues in real time and implement monitoring, alerting, and rollback mechanisms.
- Collaborate with development, QA, and product teams to support seamless integration and deployment of features.
- Guide and mentor junior DevOps engineers, instilling DevOps best practices and reusable frameworks.
- Drive DevSecOps adoption, integrating security checks and compliance into the release lifecycle.
- Stay current on industry trends and continuously assess tools, frameworks, and approaches that improve operational efficiency.
- Manage hybrid and cloud-native deployments, with a strong focus on AWS, while supporting Azure and on-prem transitions.
- Own technical documentation and process design for release pipelines, deployment architecture, and system automation.
- Help transform the Release Management function from the ground up, including strategy, team development, tooling, and governance.

Job Requirements - Experience and Education:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 10+ years of experience in DevOps, Site Reliability Engineering, or Release Management roles.
- Strong hands-on experience with CI/CD tools such as Jenkins, TeamCity, GitHub Actions, Octopus Deploy, or similar.
- Proven experience with cloud platforms: AWS (preferred), Azure, or OCI.
- Skill in scripting and automation (Python, Shell, Bash) and infrastructure as code (Terraform, CloudFormation).
- Experience managing release workflows, branching strategies, versioning, and rollbacks.
- Working knowledge of containerization and orchestration (e.g., Docker, Kubernetes, ECS).
- Familiarity with monitoring/logging tools (e.g., Datadog, Prometheus, ELK Stack).
- Strong communication and stakeholder management skills, with the capacity for cross-functional collaboration.
- Experience working with distributed systems and data platforms (e.g., Elasticsearch, Cassandra, Hadoop) is a plus.

Leadership Behaviors: building outstanding teams, setting a clear direction, simplification, collaborating and breaking silos, execution and accountability, growth mindset, innovation, inclusion, and external focus.
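One concrete slice of the rollback-mechanism work described above, sketched under stated assumptions: the health endpoint, retry policy, and Helm release name are invented placeholders, and a real pipeline would wire this gate into its CD tool rather than run it standalone.

```python
# Hedged sketch of a post-deploy health gate with automatic rollback.
import subprocess
import sys
import time

import requests

HEALTH_URL = "https://example.internal/healthz"  # placeholder endpoint


def healthy(retries: int = 5, delay: float = 3.0) -> bool:
    """Poll the health endpoint, tolerating transient failures."""
    for _ in range(retries):
        try:
            if requests.get(HEALTH_URL, timeout=2).status_code == 200:
                return True
        except requests.RequestException:
            pass
        time.sleep(delay)
    return False


if __name__ == "__main__":
    if healthy():
        print("deployment healthy")
        sys.exit(0)
    # Roll back via whatever the pipeline uses (Helm here, as an assumption).
    subprocess.run(["helm", "rollback", "my-release"], check=False)
    sys.exit(1)
```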
Posted 3 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: DevOps Engineer
Location: Gurugram (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will bridge the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
- Design, implement, and maintain end-to-end ML pipelines from model development to production deployment.
- Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions (an MLflow tracking sketch follows this description).
- Implement model versioning, experiment tracking, and model registry systems.
- Monitor model performance, detect drift, and implement automated retraining pipelines.
- Manage feature stores and data pipelines for real-time and batch inference.
- Build scalable ML infrastructure for high-volume data processing and analytics.

Enterprise Cloud Infrastructure & DevOps
- Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance.
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or Pulumi.
- Design and maintain Kubernetes clusters for containerized ML workloads.
- Build and optimize Docker containers for ML applications and microservices.
- Implement comprehensive monitoring, logging, and alerting systems.
- Manage secrets, security, and enterprise compliance requirements.

Data Engineering & Real-time Processing
- Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools.
- Implement real-time data processing and streaming architectures.
- Design data storage solutions for structured and unstructured data at scale.
- Implement data validation, quality checks, and lineage tracking.
- Manage data security, privacy, and enterprise compliance requirements.
- Optimize data processing for performance and cost efficiency.

Enterprise Platform Operations
- Ensure high availability (99.9%+) and performance of enterprise-grade platforms.
- Implement auto-scaling solutions for variable ML workloads.
- Manage multi-tenant architecture and data isolation.
- Optimize resource utilization and cost management across environments.
- Implement disaster recovery and backup strategies.
- Build 24x7 monitoring and alerting systems for mission-critical applications.

Required Qualifications

Experience & Education
- 4-8 years of experience in DevOps/MLOps, with at least 2+ years focused on enterprise ML systems.
- Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field.
- Proven experience with enterprise-grade platforms or large-scale SaaS applications.
- Experience with high-compliance environments and enterprise security requirements.
- Strong background in data-intensive applications and real-time processing systems.

Technical Skills
- Core MLOps technologies. ML frameworks: TensorFlow, PyTorch, scikit-learn, Keras, XGBoost. MLOps tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases. Model serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing. Experiment tracking: MLflow, Neptune.ai, Weights & Biases, Comet.
- DevOps & cloud technologies. Cloud platforms: AWS, Azure, or GCP with relevant certifications. Containerization: Docker, Kubernetes (CKA/CKAD preferred). CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI. IaC: Terraform, CloudFormation, Pulumi, Ansible. Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic.
- Programming & scripting. Python (advanced), the primary language for ML operations and automation; Bash/shell scripting for automation and system administration; YAML/JSON for configuration management and APIs; SQL for data operations and analytics; a basic understanding of Go or Java is an advantage.
- Data technologies. Data pipeline tools: Apache Airflow, Prefect, Dagster, Apache NiFi. Streaming and real-time: Apache Kafka, Apache Spark, Apache Flink, Redis. Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse. Data warehousing: Snowflake, BigQuery, Redshift, Databricks. Data versioning: DVC, LakeFS, Pachyderm.

Preferred Qualifications
- Advanced technical skills: enterprise security frameworks and compliance (SOC 2, ISO 27001); petabyte-scale data processing and real-time analytics; advanced system optimization, distributed computing, and caching strategies; REST/GraphQL API development, microservices architecture, and API gateways.
- Enterprise and domain experience: previous experience with enterprise clients or B2B SaaS platforms; compliance-heavy industries (finance, healthcare, government); data privacy regulations (GDPR, SOX, HIPAA); multi-tenant enterprise architectures.
- Leadership and collaboration: experience mentoring junior engineers and leading technical teams; strong collaboration with data science teams, product managers, and enterprise clients; experience with agile methodologies and enterprise project management; understanding of business metrics, SLAs, and enterprise ROI.

Growth Opportunities
- Career path: clear progression to Lead DevOps Engineer or Head of Infrastructure.
- Technical growth: work with cutting-edge enterprise AI/ML technologies.
- Leadership: the opportunity to build and lead the DevOps/Infrastructure team.
- Industry exposure: work with government and MNC enterprise clients and cutting-edge technology stacks.

Success Metrics & KPIs
- Technical KPIs: system uptime (maintain 99.9%+ availability for enterprise clients); deployment frequency (enable daily deployments with zero downtime); performance (ensure optimal response times and system performance); cost optimization (achieve 20-30% annual infrastructure cost reduction); security (zero security incidents and full compliance adherence).
- Business impact: time to market (reduce deployment cycles and improve development velocity); client satisfaction (maintain 95%+ enterprise client satisfaction scores); team productivity (improve engineering team efficiency by 40%+); scalability (support rapid client-base growth without infrastructure constraints).

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, a flexible working environment, and continuous learning opportunities.
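As a small, hedged example of the experiment-tracking work listed above, the sketch below logs a toy scikit-learn model to MLflow. The run name, parameters, and model are illustrative assumptions, not an actual Aaizel pipeline.

```python
# Minimal MLflow experiment-tracking sketch (toy model and metric).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    # Persist the model artifact so it can later be registered and served.
    mlflow.sklearn.log_model(model, "model")
```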
How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.
Posted 3 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Highspot | Website: Visit Website | Business Type: Small/Medium Business | Company Type: Product | Business Model: B2B | Funding Stage: Series D+ | Industry: Business/Productivity Software

Job Description

About Highspot
Highspot is a leading AI-powered sales enablement platform trusted by global enterprises such as DocuSign, Siemens, and FedEx. We unify content management, sales playbooks, training, coaching, and buyer engagement into a single intelligent system, helping go-to-market teams drive consistent execution and measurable revenue growth. With over $645M in funding from top investors including Tiger Global, ICONIQ, B Capital, and Salesforce Ventures, Highspot is redefining how sales teams perform at scale.

About The Role
As a Sr. Software Engineer in Search, you will leverage the latest innovations in search and AI technology to improve search relevancy, ranking, and recommendations. You will collaborate with key stakeholders (including other product teams and customer-facing teams) to analyze search-related issues and requirements, and to architect and develop efficient, scalable solutions. You will review, analyze, maintain, and optimize existing search implementations. You will work with a mix of traditional keyword search techniques, hybrid techniques leveraging embeddings, and other uses of AI to optimize results for users, rigorously measuring and driving the quality of the results. We're looking for highly motivated individuals who work well in a team-oriented, iterative, and fast-paced environment.

Responsibilities
- Develop highly available distributed services, including improving existing systems.
- Experiment with different techniques to improve search quality, relevance, and experiences (a boosted-query sketch follows this posting).
- Help design and develop new search features and functionality.
- Partner cross-functionally with UX and Product Management to create end-to-end experiences that customers love.
- Write maintainable and testable code.
- Contribute to internal and external technical documentation.
- Solve problems relating to mission-critical services and build automation to prevent problem recurrence.

Required Qualifications
- 6+ years of relevant professional experience, not including internships/co-ops.
- Strong understanding of enterprise search and search architecture.
- Solid experience with search and related technologies such as Solr, Elasticsearch, and Lucene.
- Demonstrable experience with schema design, relevancy tuning, boosting, and optimization.
- Experience working with cloud services, preferably AWS or Azure.
- Experience with search ranking and machine learning is a big plus.

Why Join Us?
At Highspot, you'll work on cutting-edge technology that transforms the way sales and marketing teams operate globally. You'll be part of a high-growth, high-impact culture that values innovation, autonomy, and collaboration. With our strong product-market fit, industry-leading funding, and a passionate team, this is your chance to be part of something big and to grow with it.
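For flavor, here is a minimal sketch of the kind of field boosting the relevancy-tuning bullet refers to, using the elasticsearch-py 8.x client. The index name, fields, and boost factors are hypothetical, not Highspot's schema.

```python
# Boosted multi-field keyword query (hypothetical index and fields).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="content",
    query={
        "multi_match": {
            "query": "sales playbook onboarding",
            # Title matches count three times as much as body matches.
            "fields": ["title^3", "body"],
            "type": "best_fields",
        }
    },
    size=10,
)

for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```

Relevancy tuning in practice is iterative: boosts like `title^3` are hypotheses to be validated against judged query sets, not settings to pick once.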
Posted 3 weeks ago
0 years
6 - 18 Lacs
India
On-site
We are hiring talented and motivated engineers to join our LLM and Generative AI team. You will contribute across the lifecycle of AI agent development: data engineering, LLM fine-tuning, RAG-based retrieval, evaluation frameworks, and deployment integration. This is an opportunity to work hands-on with open-source models like Llama, integrate them with real-world enterprise systems, and build intelligent, modular agentic AI applications.

Key Responsibilities:
- LLM model engineering: fine-tune and evaluate large language models (e.g., Llama 2/3, Mistral) using curated datasets for specific enterprise tasks.
- RAG & contextual memory: build Retrieval-Augmented Generation (RAG) pipelines using vector databases (e.g., Elasticsearch, FAISS) to enhance factual grounding (a retrieval sketch follows this posting).
- Data pipeline development: collect, clean, and structure domain-specific datasets (structured and unstructured) for training and evaluation.
- Agentic architecture integration: contribute to the design and orchestration of AI agents with modular skills for planning, dialogue, and retrieval.
- Voice & multimodal interfaces (optional): work with TTS/ASR stacks (e.g., Whisper, NeMo) to integrate GenAI into multilingual voice agent pipelines.
- Evaluation & risk tracking: implement evaluation metrics for task performance, drift detection, and hallucination control.
- Collaboration & review: work with senior AI engineers, product managers, and domain experts to translate requirements into deployed systems.
- Continuous learning: stay current with advancements in GenAI, open-source ecosystems, and governance practices (e.g., NIST AI RMF, EU AI Act).

Qualifications:
- B.Tech/M.Tech in Computer Science, AI/ML, Data Science, or a related field from a reputed institute.
- Strong fundamentals in machine learning, NLP, and Python-based development.
- Familiarity with open-source LLM frameworks (e.g., Hugging Face Transformers, LangChain, LlamaIndex).
- Exposure to vector search and embeddings (e.g., OpenAI, SBERT, or in-house models).
- Bonus: knowledge of Docker, Streamlit, REST APIs, or voice stacks (Whisper, NeMo).

Why Join Us?
- Work on real-world GenAI applications deployed across enterprise systems.
- Contribute to building India's first integrated agentic AI platform.
- Be part of a fast-growing team with deep expertise in AI engineering, fine-tuning, and voice/document intelligence.
- Work on multi-modal AI and governance-aligned AI systems from day one.

Job Type: Full-time
Pay: ₹50,000.00 - ₹150,000.00 per month
Benefits: Health insurance
Work Location: In person
Expected Start Date: 19/08/2025
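A minimal sketch of the RAG retrieval step described above, assuming FAISS with sentence-transformers; the embedding model name and documents are illustrative, and a production pipeline would add chunking, metadata filtering, and the LLM generation step.

```python
# Toy dense-retrieval step for a RAG pipeline (illustrative documents).
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices must be approved within 5 business days.",
    "Refunds are processed by the finance team.",
    "All vendors require a signed master service agreement.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, normalize_embeddings=True)

# With normalized vectors, inner product equals cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

query = model.encode(
    ["How long do invoice approvals take?"], normalize_embeddings=True
)
scores, ids = index.search(query, 2)
context = [docs[i] for i in ids[0]]  # passages used to ground the LLM prompt
print(scores[0], context)
```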
Posted 3 weeks ago
5.0 years
5 - 10 Lacs
Bengaluru
On-site
DESCRIPTION
The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight-generation, and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products, and research models to optimize operations processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists, and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world.

We are looking for a Sr. Data Scientist to join our growing science team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions that align with the org's priorities and production constraints while still creating impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and by acquiring domain knowledge pertaining to delivery operations systems. You will provide ML thought leadership to technical and business leaders, and you possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences.

Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA, and LATAM). Here is a glimpse of the problems this team deals with on a regular basis:
- Using live package and truck signals to adjust truck capacities in real time
- HOTW models for last-mile channel allocation
- Using LLMs to automate analytical processes and insight generation
- Operations research to optimize middle-mile truck routes
- Working with global partner science teams on reinforcement-learning-based pricing models and estimating shipments per route for $MM savings
- Deep learning models to synthesize attributes of addresses
- Abuse detection models to reduce network losses

Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes.
2. Design, develop, evaluate, and deploy innovative and highly scalable ML/OR models.
3. Work closely with other science and engineering teams to drive real-time model implementations.
4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions.
5. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation, and model maintenance.
6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale, complex ML models in production.
7. Lead projects and mentor other scientists and engineers in the use of ML techniques.

BASIC QUALIFICATIONS
- 5+ years of data scientist experience
- Experience with data scripting languages (e.g., SQL, Python, R) or statistical/mathematical software (e.g., R, SAS, or MATLAB)
- Experience with statistical models, e.g., multinomial logistic regression (a toy example follows this posting)
- Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
- Experience working collaboratively with data engineers and business intelligence engineers
- Demonstrated expertise in a wide range of ML techniques

PREFERRED QUALIFICATIONS
- Experience as a leader and mentor on a data science team
- Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
- Expertise in reinforcement learning and GenAI is preferred

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
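Since the basic qualifications call out multinomial logistic regression, here is a toy, hedged example on a public dataset; with scikit-learn's default lbfgs solver, multiclass fits are already multinomial (softmax).

```python
# Toy multinomial logistic regression on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# The lbfgs solver fits a single softmax model over the three classes.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```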
Posted 3 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
About CyberArk: CyberArk (NASDAQ: CYBR) is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity, human or machine, across business applications, distributed workforces, and hybrid cloud workloads, and throughout the DevOps lifecycle. The world's leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit the CyberArk blogs or follow us on X, LinkedIn, or Facebook.

Job Description
CyberArk is seeking an SRE Cloud Engineering Architect looking to bring their knowledge, excitement, and energy to the team. If you have worked in the cloud solving scale problems, bringing visibility into your platform, and accomplishing true CI/CD pipelines, we want you on the team! We want people who are driven and excited to innovate, all while growing professionally and creating strong relationships that will last a lifetime.

Responsibilities:
- Design and implementation of AWS infrastructure components such as VPCs, EC2, EKS, S3, tagging schemes, CloudFormation, etc.
- Architecture of deployment and management automation for cloud-based infrastructure and software.
- Architecting the use of configuration management tools in both Windows and Linux: CloudFormation, Helm, Terraform, Salt, Ansible.
- Ensuring cloud-based architectures meet availability and recoverability requirements.
- Architecture and implementation of cloud-based monitoring, alerting, and reporting: Datadog, Logz.io, InfluxDB, CloudWatch, Catchpoint, ELK, Grafana (a small CloudWatch alarm sketch follows this posting).
- Support and guidance on tooling that enables teams to achieve greater output and reliability.
- Deep understanding of the latest tech solutions and trends, with the ability to dive into the details of the architecture as needed.
- Work with the team leads within the group to identify areas of improvement, prepare architecture roadmaps, and advocate to the Product Management group.

Qualifications
- B.S./B.E. in Computer Science or equivalent experience.
- Minimum of 4 years of experience managing AWS infrastructure.
- Minimum of 10 years of experience with systems engineering and software development.
- Expert understanding of and experience with containerization services such as Docker and Kubernetes.
- Expertise in tools such as Datadog, InfluxDB, Grafana, Logstash, Elasticsearch.
- Solid understanding of and experience with web services, databases, and related infrastructure/architectures.
- Solid understanding of backup/restore best practices.
- Strong programming expertise in C#, C++, Java, Python, or an equivalent language.
- Excellent troubleshooting skills.
- Experience supporting an enterprise-level SaaS environment.
- Security experience is a plus.
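One tiny, hedged slice of the monitoring-and-alerting work above: creating a CloudWatch CPU alarm with boto3. The region, instance id, and threshold are placeholders, and valid AWS credentials are assumed.

```python
# Create a CloudWatch alarm on EC2 CPU utilization (placeholder values).
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-01",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=2,        # two consecutive breaches trigger the alarm
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    ActionsEnabled=False,       # wire SNS notification actions in a real setup
)
```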
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About jhana: jhana is an early-stage, seed-funded, homegrown artificial legal intelligence lab, established at Harvard in February 2022 and made in India. jhana builds datasets, agents, and interfaces that help produce legal work faster and with higher fidelity. jhana's suite of products uses AI to streamline workflows across the legal field, including litigation, corporate law, in-house counsel, and the judiciary. jhana was recently declared the most outstanding new legal tech in Asia and Oceania by ALITA, the Asia-Pacific Legal Innovation & Technology Association.

About the role: This role will be part of an agile full-stack engineering team responsible for the maintenance, stability, and improvement of the entire jhana.ai service. This team's primary tasks include integrating new product offerings; rehauling the UI to optimize user experience; expanding the offered databases; closing all bug reports in a timely manner; maintaining the server and the microservices that support it; and more. All members of this team work across diverse repositories and technologies in multiple languages as necessary.

The day-to-day: This is an intermediate role distributed across research, engineering, and product; relatedly, it is a combination of invention and optimization. These are the problem statements this role will likely continue or begin work on:
- Develop, test, and maintain full-stack applications; knowledge of Python and JS/TS-based frameworks is appreciated.
- Apply a low-level understanding of relational databases and SQL or similar technologies.
- Implement scale and optimization for billion-scale Elasticsearch, drawing on experience working with large Elasticsearch systems (a bulk-indexing sketch follows this posting).
- Deploy and maintain containerized and concurrent pipelines on AWS, with familiarity and experience using tools like Docker and Kubernetes.
- Collaborate closely with UX/UI designers to implement Material UI-based designs and ensure a seamless user experience.
- Contribute to building and enhancing RAG (Retrieval-Augmented Generation) pipelines and search stacks.
- Continuously optimize applications for performance, scalability, and security.
- Participate in code reviews, troubleshooting, and debugging to ensure high-quality software delivery.

Skills & Qualifications:

Required skills:
- Experience: 2+ years.
- Frontend: proficiency in JavaScript or TypeScript.
- Backend: strong experience with API development and RESTful services, ideally with substantial experience in Django/Python.
- Databases: solid knowledge of SQL and database design.
- Cloud: hands-on experience with AWS for deploying and managing cloud-based applications.
- UI/UX: experience with Material UI and collaborating on responsive, user-centric designs.

Added bonuses:
- Containerization: proficiency with Docker for containerized development and deployment.
- Search technologies: familiarity with Elasticsearch and implementing search-based features.
- LLM knowledge: understanding of, and hands-on experience working with, Large Language Models (LLMs) in applications.
- Vector databases: experience with vector databases, especially FAISS or Milvus, and how they integrate with machine learning systems.
- Search stacks & agents: previous experience building search stacks, agents, or intelligent information retrieval systems.
- RAG pipelines: knowledge of building and enhancing Retrieval-Augmented Generation (RAG) pipelines.
- Design tools: experience creating designs or wireframes using Figma or other design tools.
About the Team: We are a public benefit corporation headquartered in Bangalore. We operate in rapidly changing legal systems with awareness of the stakes at hand. Our intention is to influence beneficence and alignment in the technological systems that are augmenting and replacing human institutions. Our team spans diverse identities and training, from physics and mathematics to law and public policy. We are small, fast-moving, horizontally flat, and built on collaboration between lawyers, academics, and engineers. We ship fast, and every line of code our team writes has a >0.9 expectation of making it to production.

What we offer:
- Competitive salary and benefits package.
- Opportunities for growth and professional development.
- ESOPs and ownership.
- High potential for vertical and horizontal growth.
- A collaborative and dynamic work environment.
- The chance to work with cutting-edge technologies and make a real impact on a high-stakes industry, with a transformative effect on human lives and commercial economies.
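As a hedged illustration of the large-scale Elasticsearch work mentioned in the day-to-day, the sketch below bulk-indexes documents with the official Python helpers. The index name and documents are invented; billion-scale indexing would additionally call for parallel_bulk, tuned refresh intervals, and a deliberate sharding strategy.

```python
# Bulk indexing with elasticsearch-py helpers (hypothetical index and docs).
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")


def gen_actions():
    # Each action targets the (hypothetical) "judgments" index.
    snippets = ["order XII rule 6 CPC", "section 138 NI Act"]
    for i, text in enumerate(snippets):
        yield {"_index": "judgments", "_id": i, "_source": {"text": text}}


ok, errors = bulk(es, gen_actions(), chunk_size=500)
print(f"indexed {ok} documents, errors: {errors}")
```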
Posted 3 weeks ago