3.0 - 7.0 years
0 Lacs
delhi
On-site
As a Software Engineer at Bain & Company, you will be responsible for delivering application modules with minimal supervision, providing guidance to associate engineers, and collaborating with an Agile/Scrum software development team. Your primary focus will be on building and supporting Bain's strategic internal software systems to deliver value to global users and support key business initiatives.

Your responsibilities will include technical delivery tasks such as developing and updating enterprise applications, analyzing user stories, preparing work estimates, writing unit test plans, participating in the testing and implementation of application releases, and providing ongoing support for applications in use. Additionally, you will contribute to research efforts to evaluate new technologies and tools, share knowledge within the team, and present technical findings and recommendations.

You should have expertise in frameworks like .NET and .NET Core; languages such as C# and T-SQL; web frameworks/libraries like Angular/React, JavaScript, HTML, CSS, and Bootstrap; RDBMS like Microsoft SQL Server; cloud services like Microsoft Azure; unit testing tools like XUnit and Jasmine; and DevOps practices with GitHub Actions. Familiarity with search engines, NoSQL databases, and caching mechanisms, as well as the preferred skills of Python and GenAI, will be beneficial.

To qualify for this role, you should hold a Bachelor's degree or equivalent, have 3-5 years of experience developing enterprise-scale applications, possess a strong understanding of agile software development methodologies, and demonstrate excellent communication, customer service, analytical, and problem-solving skills. Your ability to work collaboratively with the team, acquire new skills, and contribute to continuous improvement efforts will be essential for success in this position.
Posted 1 week ago
2.0 - 10.0 years
0 Lacs
coimbatore, tamil nadu
On-site
The Technical Lead is responsible for leading a team of engineers in the design, implementation, maintenance, and troubleshooting of Linux-based systems. This role requires a deep understanding of Linux systems, network architecture, and software development processes. You will drive innovation, ensure system stability, and lead the team in delivering high-quality infrastructure solutions that align with the organization's goals.

You will lead and mentor a team of Linux engineers, providing technical guidance and fostering professional growth; manage workload distribution, ensuring that projects and tasks are completed on time and within scope; and collaborate with cross-functional teams to align IT infrastructure with organizational objectives. You will also be responsible for SLA and ITIL compliance and for inventory management.

You will architect, deploy, and manage robust Linux-based environments, including servers, networking, and storage solutions; ensure the scalability, reliability, and security of Linux systems; and oversee the automation of system deployment and management processes using tools such as Ansible, Puppet, or Chef. Additionally, you will handle database management for MySQL, MongoDB, Elasticsearch, and Postgres.

You will lead efforts in monitoring, maintaining, and optimizing system performance; proactively identify potential issues and implement solutions to prevent system outages; and resolve complex technical problems escalated from the support team. You will implement and maintain security best practices for Linux systems, including patch management, firewall configuration, and access controls, and ensure compliance with relevant industry standards and regulations (e.g., HIPAA, GDPR, PCI-DSS). You will develop and maintain comprehensive documentation of systems, processes, and procedures; prepare and present regular reports on system performance, incidents, and improvement initiatives to senior management; and stay up to date with the latest Linux technologies, tools, and practices.

You will lead initiatives to improve the efficiency, reliability, and security of Linux environments, and drive innovation in infrastructure management, including the adoption of cloud technologies and containerization.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience
- 10+ years of experience in Linux system administration, with at least 2 years in a leadership or senior technical role
- Deep understanding of Linux operating systems (RHEL, CentOS, Ubuntu) and associated technologies
- Strong knowledge of networking principles, including TCP/IP, DNS, and firewalls
- Experience with automation and configuration management tools (e.g., Ansible, Puppet, Chef)
- Proficiency in scripting languages (e.g., Bash, Python)
- Experience with virtualization (e.g., VMware, KVM) and containerization (e.g., Docker, Kubernetes)
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and hybrid cloud environments
- Excellent problem-solving skills and the ability to work under pressure
- Strong communication and interpersonal skills

Job Types: Full-time, Permanent
Benefits:
- Health insurance
- Provident Fund
Schedule: Day shift
Yearly bonus
Work Location: In person
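The qualifications above pair configuration-management tools with Python scripting. As a hedged illustration of that combination, here is a minimal patch-compliance check of the kind such automation often wraps; the package names, version minimums, and the `out_of_date` helper are all invented for this sketch and are not tied to any real environment.

```python
# Hypothetical sketch: a patch-compliance check a Linux lead might automate.
# Package names and version minimums below are illustrative, not real policy.

REQUIRED_MINIMUMS = {"openssl": (3, 0, 14), "kernel": (5, 14, 0)}

def parse_version(text):
    """Turn a dotted version string like '3.0.13' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def out_of_date(installed):
    """Return package names whose installed version is below the required minimum."""
    return sorted(
        name for name, minimum in REQUIRED_MINIMUMS.items()
        if parse_version(installed.get(name, "0")) < minimum
    )

installed = {"openssl": "3.0.13", "kernel": "5.15.2"}
print(out_of_date(installed))  # ['openssl'] -- openssl is behind the 3.0.14 minimum
```

In practice a check like this would feed an Ansible play or a patching dashboard rather than a print statement.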
Posted 1 week ago
18.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Software engineering is the application of engineering to the design, development, implementation, testing and maintenance of software in a systematic method. The roles in this function cover all primary development activity across all technology functions that ensure we deliver high-quality code for our applications, products and services, understand customer needs, and develop product roadmaps. These roles include, but are not limited to, analysis, design, coding, engineering, testing, debugging, standards, methods, tools analysis, documentation, research and development, maintenance, new development, operations and delivery. Every position in the company carries a requirement to build quality into every output. This also includes evaluating new tools, techniques and strategies; automating common tasks; and building common utilities to drive organizational efficiency, with a passion for technology and solutions and thought leadership on future capabilities and opportunities to apply technology in new and innovative ways.
Primary Responsibilities:
- Help define the platform roadmap for the enterprise and the multi-region cloud strategy, and own services and capability delivery end to end
- Work very closely with various business stakeholders to drive the execution of multiple business initiatives and technologies
- Set short- and long-term vision, strategy, structure, and direction for the platform organization
- Partner with all business and product leaders to develop new product features and upgrade the existing Identity Platform product and processes; help define product and project deliverables, budgets, schedules, and testing, launch and release plans
- Define the digital identity strategy for migrating and reengineering existing products and business processes
- Build, develop, and guide high-performing talent for this platform team; define a long-term talent strategy cutting across domains and technology
- Manage web-scale systems to demanding availability targets (99.999%+)
- Stay abreast of leading-edge technologies in the industry, evaluating emerging technologies and evangelizing their adoption
- Manage an Agile (Scrum) development process in a continuous integration and deployment methodology
- Empower software delivery teams to rapidly deliver software through the use of automation and "everything-as-code" best practices
- Adapt to and remain effective in a changing environment
- Take a hands-on approach to better understand the technical challenges faced by the team, and guide the team in technical solutions
- Oversee the planning and technical direction of various development tracks
- Drive architectural initiatives that align our business needs and technical capabilities for Identity Management solutions
- Represent Optum at various forums
- Provide leadership to, and be accountable for, the performance and results of multiple layers of management and senior-level professional staff; the impact of this work is most often at the regional (e.g. multi-state) level, or covers a major portion of a business segment, functional area, or line of business

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- B.E. / B.Tech (Computer Science) from a reputed college; Master's degree preferable; total industry experience of 18+ years
- 5+ years of experience leading a team of software engineers
- 4+ years of experience working in a DevOps model
- 2+ years of experience managing and using more than one public cloud platform (GCP, AWS, Azure, Oracle Cloud)
- Experience with DevOps engineering tools such as Jenkins, Terraform, etc.
- Experience in a fast-paced, Agile, continuous integration environment
- Experience in systems monitoring, alerting, and analytics
- Experience with log aggregation and reporting tools, such as Elasticsearch and Splunk
- Experience working in an off-shore / on-shore model
- Hands-on experience in distributed application development in an AWS public cloud environment, following well-architected principles with the goal of 99.999% availability
- Cloud-native development experience: developing and deploying microservices to Kubernetes
- Solid programming experience with Java
- Experience leading development using modern software engineering and product development practices, including Agile, continuous integration, continuous delivery, DevOps, etc.
- Experience developing highly resilient and scalable cloud-native and cloud-independent applications
- Experience developing multi-tenant SaaS-based applications
- Demonstrable experience leading international delivery and engineering teams
- Excellent knowledge of distributed computing
- Proven solid technical skills in data structures, algorithms, system design, coding best practices, and build-release procedures
- Proven ability to communicate well at all levels of the organization
- Extremely hands-on technically, with a deep passion and curiosity for technology
- Exceptional communication skills and the demonstrable ability to communicate appropriately at all levels of the organization, including senior technology and business leaders
- Self-driven; a strategic leader who continuously raises the bar for self and the team

Preferred Qualifications:
- Certification in a cloud platform (AWS, GCP, Azure)
- Experience architecting, delivering, and operating large-scale, highly available systems
- Healthcare industry experience
- Experience in complex projects with division- or company-wide scope
- Experience with information security threat modelling and risk analysis
- Knowledge of the implementation of technology specifications and/or RFCs
- Familiarity with IT standards and best practices, audit, security and compliance (e.g. ITIL, ITSM, SOC2, HIPAA, HITRUST CSF)
- Proven success delivering products/services in a high-growth environment, exhibiting a solid ability to identify and solve ambiguous customer-focused problems
- Proven success in hiring and developing highly effective software engineering teams in a global team environment
- High attention to detail, with a proven ability to juggle multiple competing priorities simultaneously and make things happen in a fast-paced, dynamic environment

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone.
We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Are you passionate about inspiring change, building data-driven tools to improve software quality, and ensuring customers have the best experience? If so, we have a phenomenal opportunity for you! NVIDIA is seeking a creative and hands-on software engineer with a test-to-failure approach who is a quick learner, can understand software and hardware specifications, and can build reliable tests and tools in C++/C#/Python to improve quality and accelerate the delivery of NVIDIA products. As a Software Automation and Tools Engineer, you will take part in the technical design and implementation of tests for NVIDIA software products with the goal of identifying defects early in the software development lifecycle. You will also build tools that accelerate execution workflows for the organization.

In this role, you can expect to:
- Develop automated end-to-end tests for NVIDIA device drivers and SDKs on the Windows platform
- Execute automated tests, and identify and report defects
- Measure code coverage, and analyze and drive code coverage improvements
- Develop applications and tools that bring data-driven insights to development and test workflows
- Build tools/utilities/frameworks in Python/C/C++ to automate and optimize testing workflows in the GPU domain
- Write maintainable, reliable, and well-documented code
- Debug issues to identify the root cause
- Provide peer code reviews, including feedback on performance, scalability, and correctness
- Optimally estimate and prioritize tasks to create a realistic delivery schedule
- Generate and test compatibility across a range of products and interfaces
- Work closely with leadership to report progress by generating effective and impactful reports

You will have the opportunity to work on challenging technical and process issues.

What we need to see:
- B.E./B.Tech degree in Computer Science/IT/Electronics engineering with strong academics, or equivalent experience
- 5+ years of programming experience in Python/C/C++, with experience applying object-oriented programming concepts
- Hands-on knowledge of developing Python scripts with application development concepts like dictionaries, tuples, RegEx, PIP, etc.
- Good experience using AI development tools for test plan creation, test case development, and test case automation
- Experience with testing RESTful APIs, and the ability to conduct performance and load testing to ensure the application can handle high traffic and usage
- Experience working with databases and storage technologies like SQL and Elasticsearch
- Good understanding of OS fundamentals, PC hardware, and troubleshooting
- The ability to collaborate with multiple development teams to gain knowledge and improve test code coverage
- Excellent written and verbal communication skills and excellent analytical and problem-solving skills
- The ability to work with a team of engineers in a fast-paced environment

Ways to stand out from the crowd:
- Prior project experience building ML- and DL-based applications would be a plus
- Good understanding of testing fundamentals
- Good problem-solving skills (solid logic to apply in isolation and regression of issues found)
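The posting above asks for reliable, table-driven tests written in Python with a test-to-failure mindset. As a sketch only, assuming an invented `clamp_resolution` helper (not an NVIDIA API), a small data-driven test might look like this:

```python
# Illustrative only: the function under test and its cases are invented to
# show the table-driven testing style, not any real driver or SDK interface.

def clamp_resolution(width, height, max_pixels=8_294_400):  # 3840 * 2160
    """Scale a resolution down uniformly so width*height stays within max_pixels."""
    pixels = width * height
    if pixels <= max_pixels:
        return width, height
    scale = (max_pixels / pixels) ** 0.5
    return int(width * scale), int(height * scale)

# A small table of (input, expected) cases, the shape a test-automation
# engineer might start from before porting to a real test framework.
CASES = [
    ((1920, 1080), (1920, 1080)),   # already in range: unchanged
    ((7680, 4320), (3840, 2160)),   # 8K scales down to 4K
]

for args, expected in CASES:
    got = clamp_resolution(*args)
    assert got == expected, f"clamp_resolution{args} -> {got}, expected {expected}"
print("all cases passed")
```

The same table extends naturally with boundary and failure-inducing cases, which is where the test-to-failure approach earns its keep.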
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About TripleLift
We're TripleLift, an advertising platform on a mission to elevate digital advertising through beautiful creative, quality publishers, actionable data and smart targeting. Through over 1 trillion monthly ad transactions, we help publishers and platforms monetize their businesses. Our technology is where the world's leading brands find audiences across online video, connected television, display and native ads. Brand and enterprise customers choose us because of our innovative solutions, premium formats, and supportive experts dedicated to maximizing their performance. As part of the Vista Equity Partners portfolio, we are NMSDC certified, qualify for diverse spending goals and are committed to economic inclusion. Find out how TripleLift raises up the programmatic ecosystem at triplelift.com.

The Role
TripleLift is seeking a Senior Data Engineer to join a small, influential Data Engineering team. You will be responsible for expanding and optimizing our high-volume, low-latency data platform architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline engineer and data wrangler who enjoys optimizing data systems and building them from the ground up. This role will support our software engineers, product managers, business intelligence analysts and data scientists on data initiatives, and will ensure optimal data delivery architecture is applied consistently throughout new and ongoing projects. Ideal candidates must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Responsibilities
Create and maintain optimal, high-throughput data platform architecture handling hundreds of billions of daily events.
Explore, refine, and assemble large, complex data sets that meet functional product and business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, EMR, Snowpark, Kafka, and other big data technologies. Work with stakeholders across geo-distributed teams, including product managers, engineers, and analysts, to assist with data-related technical issues and support their data infrastructure needs. Digest and communicate business requirements effectively to both technical and non-technical audiences. Translate business requirements into concise technical specifications.

Qualifications
- 6+ years of experience in a Data Engineer role
- Bachelor's degree, or higher, in Computer Science or a related engineering field
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets
- Expert working knowledge of Databricks/Spark and associated APIs
- Strong experience with object-oriented and functional scripting languages (Python, Java, Scala) and the associated toolchain
- Experience working with relational databases and SQL authoring/optimizing, as well as operational familiarity with a variety of databases
- Experience with AWS cloud services: EC2, EMR, RDS
- Experience working with NoSQL data stores such as Elasticsearch and Apache Druid
- Experience with data pipeline and workflow management tools such as Airflow
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong experience working with unstructured and semi-structured data formats: JSON, Parquet, Iceberg, Avro, Protobuf
- Expert knowledge of processes supporting data transformation, data structures, metadata, dependency, and workload management
- Proven experience in manipulating, processing, and extracting value from large, disparate datasets
- Working knowledge of stream processing, message queuing, and highly scalable 'big data' data stores
- Experience supporting and working with cross-functional teams in a dynamic environment

Preferred
- Streaming systems experience with Kafka, Spark Streaming, Kafka Streams
- Snowflake/Snowpark
- DBT
- Exposure to AdTech

Life at TripleLift
At TripleLift, we're a team of great people who like who they work with and want to make everyone around them better. This means being positive, collaborative, and compassionate. We hustle harder than the competition and are continuously innovating. Learn more about TripleLift and our culture by visiting our LinkedIn Life page.

Establishing People, Culture and Community Initiatives
At TripleLift, we are committed to building a culture where people feel connected, supported, and empowered to do their best work. We invest in our people and foster a workplace that encourages curiosity, celebrates shared values, and promotes meaningful connections across teams and communities. We want to ensure the best talent of every background, viewpoint, and experience has an opportunity to be hired, belong, and develop at TripleLift. Through our People, Culture, and Community initiatives, we aim to create an environment where everyone can thrive and feel a true sense of belonging.

Privacy Policy
Please see our Privacy Policies on our TripleLift and 1plusX websites. TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.
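The responsibilities above center on extracting, transforming, and loading semi-structured event data. The posting names Spark, EMR, and Kafka for this; as a hedged, stdlib-only stand-in, here is the shape of one transform step. The event records, field names, and `aggregate_revenue` helper are invented for illustration and are not TripleLift's schema.

```python
import json
from collections import Counter

# Hypothetical events: JSON lines of ad transactions (schema invented).
raw_events = [
    '{"publisher": "site-a", "format": "native", "revenue_micros": 1200}',
    '{"publisher": "site-a", "format": "video",  "revenue_micros": 5400}',
    '{"publisher": "site-b", "format": "native", "revenue_micros": 800}',
]

def aggregate_revenue(lines):
    """Parse JSON event lines and sum revenue per publisher (the 'T' in ETL)."""
    totals = Counter()
    for line in lines:
        event = json.loads(line)
        totals[event["publisher"]] += event["revenue_micros"]
    return dict(totals)

print(aggregate_revenue(raw_events))  # {'site-a': 6600, 'site-b': 800}
```

At hundreds of billions of daily events, the same groupBy-and-sum would of course run as a distributed Spark job over Parquet or Iceberg rather than a Python loop; the logic is what carries over.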
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About TripleLift
We're TripleLift, an advertising platform on a mission to elevate digital advertising through beautiful creative, quality publishers, actionable data and smart targeting. Through over 1 trillion monthly ad transactions, we help publishers and platforms monetize their businesses. Our technology is where the world's leading brands find audiences across online video, connected television, display and native ads. Brand and enterprise customers choose us because of our innovative solutions, premium formats, and supportive experts dedicated to maximizing their performance. As part of the Vista Equity Partners portfolio, we are NMSDC certified, qualify for diverse spending goals and are committed to economic inclusion. Find out how TripleLift raises up the programmatic ecosystem at triplelift.com.

The Role
TripleLift is seeking a Data Engineer II to join a small, influential Data Engineering team. You will be responsible for evolving and optimizing our high-volume, low-latency data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. In this role, you will support our software engineers, product managers, business intelligence analysts, and data scientists on data initiatives. You will also ensure optimal data delivery architecture is applied consistently across all new and ongoing projects. Ideal candidates will be self-starters who can efficiently meet the data needs of various teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Responsibilities
Create and maintain optimal, high-throughput data platform architecture handling hundreds of billions of daily events.
Explore, refine, and assemble large, complex data sets that meet functional product and business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, EMR, Snowpark, Kafka, and other big data technologies. Work with stakeholders across geo-distributed teams, including product managers, engineers, and analysts, to assist with data-related technical issues and support their data infrastructure needs. Digest and communicate business requirements effectively to both technical and non-technical audiences.

Qualifications
- 2+ years of experience in a Data Engineer role
- Bachelor's degree, or higher, in Computer Science or a related engineering field
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets
- Strong working knowledge of Databricks/Spark and associated APIs
- Experience with object-oriented and functional scripting languages (Python, Java, Scala) and the associated toolchain
- Experience working with relational databases and SQL authoring/optimizing, as well as operational familiarity with a variety of databases
- Experience with AWS cloud services: EC2, EMR, RDS
- Experience working with NoSQL data stores such as Elasticsearch and Apache Druid
- Experience with data pipeline and workflow management tools such as Airflow
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong experience working with unstructured and semi-structured data formats: JSON, Parquet, Iceberg, Avro, Protobuf
- Expert knowledge of processes supporting data transformation, data structures, metadata, dependency, and workload management
- Proven experience in manipulating, processing, and extracting value from large, disparate datasets
- Working knowledge of stream processing, message queuing, and highly scalable 'big data' data stores
- Experience supporting and working with cross-functional teams in a dynamic environment

Preferred
- Streaming systems experience with Kafka, Spark Streaming, Kafka Streams
- Snowflake/Snowpark
- DBT
- Exposure to AdTech

Life at TripleLift
At TripleLift, we're a team of great people who like who they work with and want to make everyone around them better. This means being positive, collaborative, and compassionate. We hustle harder than the competition and are continuously innovating. Learn more about TripleLift and our culture by visiting our LinkedIn Life page.

Establishing People, Culture and Community Initiatives
At TripleLift, we are committed to building a culture where people feel connected, supported, and empowered to do their best work. We invest in our people and foster a workplace that encourages curiosity, celebrates shared values, and promotes meaningful connections across teams and communities. We want to ensure the best talent of every background, viewpoint, and experience has an opportunity to be hired, belong, and develop at TripleLift. Through our People, Culture, and Community initiatives, we aim to create an environment where everyone can thrive and feel a true sense of belonging.

Privacy Policy
Please see our Privacy Policies on our TripleLift and 1plusX websites. TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.
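Among the qualifications above is working knowledge of stream processing. One of its basic building blocks is bucketing events into fixed (tumbling) time windows; as a purely illustrative sketch with invented timestamps, in plain Python:

```python
from collections import defaultdict

# Illustrative sketch of tumbling-window aggregation, a core streams-processing
# idea. Real pipelines would do this in Kafka Streams or Spark Streaming;
# the timestamps and window size here are invented for the example.

WINDOW_SECONDS = 60

def window_counts(event_timestamps):
    """Count events per 60-second window, keyed by the window's start time."""
    counts = defaultdict(int)
    for ts in event_timestamps:
        counts[ts - ts % WINDOW_SECONDS] += 1
    return dict(counts)

timestamps = [5, 30, 59, 61, 140]   # seconds since some epoch
print(window_counts(timestamps))    # {0: 3, 60: 1, 120: 1}
```

Tumbling windows partition time into non-overlapping buckets, which keeps each event in exactly one window; sliding windows relax that at the cost of duplicated work.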
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
As a developer, you will bring ideas to life by relentlessly improving the performance, scalability, and maintainability of the development process. You will drive and adhere to software development best practices while continuously seeking to improve and maintain software. Collaborating closely with internal operations teams, you will empower them with technology solutions. Your responsibilities will include automating tasks wherever possible and configuring everything as code. Additionally, you will estimate and manage feature deliveries in a predictable manner, driving discussions to enhance products, processes, and technologies. Incremental changes to the architecture will be made after conducting impact analyses.

With 1.5-4 years of experience in product development and architecture, you possess strong fundamentals in Computer Science, especially in networking, databases, and operating systems. Organized and self-sufficient with attention to detail, you excel in both front-end and back-end development, preferably as a full-stack developer. Your understanding of MVC frameworks like Rails, Angular, Django, and React, along with familiarity with microservice architecture and test-driven development, will be essential in this role. Proficiency in *nix systems and AWS/Kubernetes, along with SQL skills (particularly PostgreSQL), will be advantageous. Previous experience mentoring developers is a plus. Additional experience with New Relic, Kafka, Elasticsearch, RPC, SOA, event-driven systems, and message buses, and experience designing services/applications from scratch, will be highly valued in this position.
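The posting above values event-driven systems and message buses. As a minimal sketch, assuming an invented `MessageBus` class and topic names (real deployments would use Kafka or a similar broker), the publish/subscribe pattern boils down to this:

```python
from collections import defaultdict

# Minimal in-process sketch of the publish/subscribe (message bus) pattern.
# The class, topic name, and payload are invented for illustration only.

class MessageBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to be invoked for every message on `topic`."""
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        """Deliver `payload` to every handler subscribed to `topic`."""
        for handler in self._handlers[topic]:
            handler(payload)

bus = MessageBus()
received = []
bus.subscribe("user.created", received.append)
bus.publish("user.created", {"id": 42})
print(received)  # [{'id': 42}]
```

The decoupling shown here (publishers never reference subscribers directly) is what makes the pattern scale out once the in-process list is replaced by a durable broker.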
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Software Engineer at Bain & Company, you will be responsible for delivering application modules with minimal supervision, guiding entry-level engineers, and collaborating with an Agile software development team. You will work on building and supporting Bain's strategic internal software systems, focusing on delivering value to global users and supporting key business initiatives. Your role will involve developing enterprise-scale browser-based or mobile applications using current Microsoft development languages and technologies.

Your primary responsibilities and duties will include:

Technical Delivery (80%):
- Collaborating with teams on enterprise applications
- Participating in Agile team events and activities
- Identifying technical steps required for story completion
- Working with senior team members to evaluate product backlog items
- Demonstrating business and domain knowledge to achieve outcomes
- Analyzing user stories, performing task breakdown, and completing committed tasks
- Understanding and using infrastructure to develop features
- Following application design and architecture standards
- Writing unit test plans and executing tests
- Testing and implementing application releases
- Providing ongoing support for applications in use
- Acquiring new skills through training to be a T-shaped team member
- Contributing to sprint retrospectives for team improvement
- Following Bain development project process and standards
- Writing technical documentation as required

Research (10%):
- Evaluating and employing new technologies for software applications
- Researching and evaluating tools and technologies for future initiatives
- Sharing concepts and technologies with the software development team

Communication (10%):
- Presenting technical findings and recommendations to the team
- Communicating impediments clearly and ensuring understanding of completion criteria
- Providing input during sprint retrospectives for team improvement

You should have knowledge and experience in frameworks such as .NET and .NET Core; languages like C# and T-SQL; web frameworks/libraries including Angular/React, JavaScript, HTML, CSS, and Bootstrap; RDBMS like Microsoft SQL Server; cloud services such as Microsoft Azure; unit testing tools like XUnit and Jasmine; DevOps tools like GitHub Actions; and more. Preferred skills include Python and GenAI.

Qualifications:
- Bachelor's or equivalent degree
- 3-5 years of experience in software development
- Experience in developing enterprise-scale applications
- Strong knowledge of agile software development methodologies
- Excellent communication, customer service, analytical, and problem-solving skills
- Demonstrated T-shaped behavior to expedite delivery and manage conflicts/contingencies
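The role above calls for writing unit test plans and executing tests, naming XUnit (C#) and Jasmine (JavaScript). As a language-neutral sketch of the same arrange-act-assert shape, using Python's stdlib `unittest` and a discount function invented purely for illustration:

```python
import unittest

# The function under test is hypothetical; only the test structure matters here.
def apply_discount(price, percent):
    """Return `price` reduced by `percent`, rounded to 2 decimal places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        # Arrange/act/assert: 15% off 200.0 should be 170.0
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Error paths deserve tests too, mirroring XUnit's Assert.Throws
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Saved in a module of your choosing, these run under `python -m unittest`; the XUnit and Jasmine equivalents swap `TestCase` methods for `[Fact]` methods or `it(...)` blocks but keep the same three-step structure.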
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As a Software Engineer II - Backend - Data Engineering at Blueshift, a venture-funded startup headquartered in San Francisco and expanding its team in Pune, India, you will play a crucial role in contributing to the development and maintenance of high-throughput, low-latency data pipelines. These pipelines ingest millions of user, product, and event records into our system in real-time and batch modes.

In this position, you will collaborate closely with experienced engineers to create scalable, fault-tolerant infrastructure that supports our core personalization and decision-making systems. Your responsibilities will include integrating with external systems and data warehouses to ensure reliable data ingestion and export. You will have the opportunity to take ownership of key components and features, from design to deployment, and enhance your skills in system design, performance optimization, and production monitoring. Working with modern technologies like Rust, Elixir, and Ruby on Rails, you will gain hands-on experience in building backend systems that operate at scale.

Key Responsibilities:
- Design, implement, and maintain data ingestion pipelines for real-time and batch processing workloads.
- Build and improve services for moving and transforming large volumes of data efficiently.
- Develop new microservices and components to enhance data integration and platform scalability.
- Integrate with third-party data warehouses and external systems for importing or exporting data.
- Contribute to enhancing the performance, reliability, and observability of existing data systems.
- Assist in diagnosing and resolving data-related issues reported by internal teams or customers.
- Engage in code reviews, knowledge sharing, and continuous improvement of engineering practices within the team.

Requirements:
- Bachelor's/Master's in Computer Science or related fields.
- 2+ years of experience as a backend engineer.
- Solid understanding of CS concepts, OOP, data structures, and concurrency.
- Proficiency in relational databases and SQL.
- Attention to detail, curiosity, proactiveness, willingness to learn, and a sense of ownership.
- Good communication and coordination skills.
- Experience with public clouds like AWS/Azure/GCP and modern languages like Elixir/Rust/Golang is advantageous.
- Experience with NoSQL systems such as Cassandra/ScyllaDB, Elasticsearch, and Redis is a plus.

Perks and Benefits:
- Competitive salary and stock option grants.
- Comprehensive hospitalization, personal accident, and term insurance coverage.
- Conveniently located in Baner, one of the best neighborhoods for tech startups.
- Daily catered breakfast, lunch, snacks, and a well-stocked pantry.
- Supportive team environment that values your well-being and growth opportunities.
Posted 1 week ago
15.0 - 19.0 years
0 Lacs
chennai, tamil nadu
On-site
SquareShift is a Chennai-based, high-growth software services firm with a global presence in the US and Singapore. Established in 2019, we specialize in providing AI-led cloud and data solutions, including Google Cloud Platform consulting, Elasticsearch-based observability, data engineering, machine learning, and secure, scalable product engineering across various industries such as banking, retail, and hi-tech. As an official Google Cloud Partner and Elastic Partner, we assist enterprises in modernizing, innovating, and scaling through multi-cloud architectures, AI-powered platforms, and cutting-edge R&D initiatives. Our Innovation & Research Lab rapidly develops proofs-of-concept (POCs) in AI, analytics, and emerging technologies to drive business value with speed and precision. We are on a mission to become the most trusted partner in enterprise cloud adoption by solving complex challenges through innovation, execution excellence, and a culture that emphasizes learning, action, quality, and a strong sense of belonging.

The Vice President / Head of Engineering at SquareShift will lead our global engineering organization from Chennai, defining the technology vision, driving innovation, and scaling high-performance teams. This role combines strategic leadership, deep technical expertise, and hands-on engagement with customers and partners to deliver cutting-edge, cloud-enabled, AI-driven solutions.

Key Responsibilities:

**Technology & Strategy**
- Define and execute the engineering strategy aligned with corporate goals and evolving market trends.
- Lead multi-stack engineering delivery across web, mobile, backend, AI/ML, and cloud platforms.
- Oversee cloud-native architectures with a strong focus on Google Cloud Platform and ensure integration with AWS/Azure as necessary.

**Leadership & Execution**
- Build, mentor, and manage a world-class engineering team in Chennai and other global locations.
- Drive agile delivery excellence, ensuring projects are delivered on time, within budget, and to world-class standards.
- Promote DevOps, CI/CD, microservices, and secure-by-design development practices.

**Innovation & R&D**
- Lead the Proof of Concept (POC) & Research Department to explore and validate AI, Elasticsearch, data engineering, and emerging technologies.
- Collaborate with Google Cloud and Elastic's engineering teams on joint innovations.
- Rapidly prototype, test, and deploy solutions for enterprise digital transformation.

**Client & Business Engagement**
- Partner with sales, pre-sales, and delivery leadership to shape technical proposals and architectures.
- Engage with C-level executives of client organizations to define and deliver technology roadmaps.
- Own engineering resource planning, budget control, and capacity scaling.

Required Skills & Qualifications:
- 15+ years in software engineering, including 7+ years in senior leadership roles.
- Proven success in leading multi-stack, multi-cloud teams using various technologies.
- Hands-on expertise in Google Cloud Platform, Elasticsearch, and AI/ML systems.
- Experience in POC and rapid prototyping environments.
- Strong background in security, compliance, and enterprise architecture.
- Excellent leadership, stakeholder management, and communication skills.

Preferred Attributes:
- Experience in partner ecosystem collaboration.
- History of building and scaling R&D teams in India.
- Exposure to digital transformation projects for enterprise clients.
- Innovative, entrepreneurial, and technology-forward mindset.

Why Join Us
- Lead the engineering vision of a global GCP & Elastic partner from Chennai.
- Directly influence AI, cloud, and search innovation for Fortune 500 clients.
- Build and shape world-class engineering teams and research labs.
- Work in a culture that celebrates innovation, speed, and excellence.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
At Crimson Enago, we are dedicated to developing AI-powered tools and services that enhance the productivity of researchers and professionals. We understand that the stages of knowledge discovery, acquisition, creation, and dissemination can be cognitively demanding and interconnected. This is why our flagship products, Trinka and RAx, have been designed to streamline and accelerate these processes.

Trinka, available at www.trinka.ai, is an AI-powered English grammar checker and language enhancement writing assistant specifically tailored for academic and technical writing. Developed by linguists, scientists, and language enthusiasts, Trinka identifies and corrects numerous intricate writing errors, ensuring your content is error-free. It goes beyond basic grammar correction by addressing contextual spelling mistakes and advanced grammar errors, enhancing vocabulary usage, and providing real-time writing suggestions. With subject-specific correction features, Trinka ensures that your writing is professional, concise, and engaging. Moreover, Trinka's Enterprise solutions offer unlimited access and customizable options to leverage its full capabilities.

RAx, the first smart workspace available at https://raxter.io, is designed to assist researchers (including students, professors, and corporate researchers) in optimizing their research projects. Powered by proprietary AI algorithms and innovative problem-solving approaches, RAx aims to become the go-to workspace for research-intensive projects. By bridging information sources such as research papers, blogs, wikis, books, courses, and videos with user behaviors like reading, writing, annotating, and discussing, RAx uncovers new insights and opportunities in the academic realm.

Our team consists of passionate researchers, engineers, and designers who share a common goal of revolutionizing research-intensive project workflows. We are committed to reducing cognitive load and facilitating the conversion of information into knowledge. The engineering team is dedicated to creating a scalable platform that manages vast amounts of data, implements AI processing, and caters to users worldwide. We firmly believe that research plays a crucial role in shaping a better world and strive to make the research process accessible and enjoyable.

As an SDE-3 Fullstack at Trinka (https://trinka.ai), you will lead a team of web developers, drive end-to-end project development, and collaborate with key stakeholders such as the Engineering Manager, Principal Engineer, and Technical Project Manager. Your responsibilities will include hands-on coding, team leadership, hiring, training, and ensuring project delivery.

We are looking for an SDE-3 Fullstack with at least 5 years of enterprise full-stack web experience, focusing on the AngularJS-Java-AWS stack. Ideal candidates will possess excellent research skills, advocate for comprehensive testing practices, demonstrate strong software design patterns, and exhibit expertise in optimizing scalable solutions. Additionally, experience with AWS technologies, database management, frontend development, and collaboration within a team-oriented environment is highly valued.

If you meet the above requirements and are enthusiastic about contributing to a dynamic and innovative team, we invite you to join us in our mission to simplify and revolutionize research-intensive projects. Visit our websites at: https://www.trinka.ai/, https://raxter.io/, https://www.crimsoni.com/
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Senior Data Engineer with 7-12 years of experience, you will be based in Bangalore (Ecoworld) with a hybrid work-from-office setup. Immediate joiners are preferred for this role.

Your primary responsibility will be processing large-scale process data for real-time analytics and ML model consumption. You should possess strong analytical skills to assess, engineer, and optimize data pipelines effectively.

In terms of technical skills, you should be proficient in advanced Python and PySpark. Experience working with cloud platforms such as AWS and Azure is required. Familiarity with frameworks like Lambda, Django, and Express will be beneficial. Knowledge of databases including PostgreSQL, MongoDB, and Elasticsearch is essential for this role.

Your day-to-day tasks will include designing, developing, conducting automated unit testing using pytest, and packaging Python applications. Candidates with experience in the industrial automation domain will be preferred. Proficiency in Agile methodology and the DevOps toolset (Git/Bitbucket, GitHub Copilot) is a plus.

We are looking for individuals with strong problem-solving abilities, excellent communication skills, and the capability to work both independently and collaboratively within a team.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You will have the opportunity to work in a dynamic environment where innovation and teamwork are key to supporting exciting missions around the world. The Qualys SaaS platform is database-centric, relying on technologies such as Oracle, Elasticsearch, Cassandra, Kafka, Redis, and Ceph to deliver uninterrupted service to customers globally. We are seeking an individual who is proficient in two or more of these technologies and is eager to learn new technologies while automating day-to-day tasks.

As a Database Reliability Engineer (DBRE), your primary responsibility will be to ensure the smooth operation of production systems across various worldwide platforms. Collaboration with Development/Engineering, SRE, Infra, Networking, and Support teams, as well as customers worldwide, will be essential to provide 24x7 support for Qualys production applications and enhance service through database optimizations. This role reports to the Manager, DBRE.

Key Responsibilities:
- Installation, configuration, and performance tuning of Oracle, Elasticsearch, Cassandra, Kafka, Redis, and Ceph in a multi-DC environment.
- Identification of bottlenecks and performance tuning to maintain database integrity and security.
- Monitoring performance and managing configuration parameters for fast query responses.
- Installation, testing, and patching of new and existing databases.
- Ensuring the proper functioning of storage, archiving, backup, and recovery procedures.
- Understanding business applications to recommend and implement necessary database or architecture changes.

Qualifications:
- Minimum 5 years of experience managing SQL & NoSQL databases.
- Extensive knowledge of diagnostic, monitoring, and troubleshooting tools to improve database performance.
- Understanding of the interrelationships between databases, business applications, and operating systems.
- Familiarity with backup and recovery scenarios and real-time data synchronization.
- Strong problem-solving and analytical skills with the ability to collaborate across teams.
- Experience working in a mid- to large-size production environment.
- Working knowledge of Ansible, Jenkins, Git, and Grafana is a plus.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a Full-stack Software Engineer on Autodesk's Fusion Operations team, you will be responsible for leading the design, implementation, testing, and maintenance of applications that provide a device-independent experience. You will collaborate with cross-functional teams to ensure project goals and timelines are aligned. Utilizing your expertise in object-oriented programming principles, Java, and frontend development technologies, you will deliver robust, performant, and maintainable commercial applications.

Your role will involve leveraging cloud technologies like AWS services for scalable solutions and participating in an on-call rotation to support production systems. You will also develop and maintain automated tests to increase overall code coverage. Your familiarity with Agile methodologies, the Scrum framework, and proficiency in Java frameworks such as Play or Spring will be advantageous in this position.

Minimum qualifications include a Bachelor's degree in computer science or a related field, along with 3+ years of industry experience. Proficiency in Java, JavaScript/HTML/CSS, and MySQL databases is required. Strong problem-solving skills, adaptability to changing priorities, and excellent communication skills in English are essential for success in this role.

Preferred qualifications include experience with frontend frameworks like Vue.js or React, Elasticsearch, test automation tools, and CI/CD tools. A basic understanding of event-driven architecture principles and familiarity with CAD concepts related to Inventor, AutoCAD, and Factory Design Utilities would be beneficial.

Join Autodesk, where amazing things are created every day using innovative software. Become a part of a culture that guides the way we work, connect with customers, and show up in the world. Shape a better world designed and made for all by joining us in our mission to turn ideas into reality.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Company Description

It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today — ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.

Job Description

About Digital Technology: We're not yesterday's IT department, we're Digital Technology. The world around us keeps changing and so do we. We're redefining what it means to be IT with a mindset centered on transformation, experience, AI-driven automation, innovation, and growth. We're all about delivering delightful, secure customer and employee experiences that accelerate ServiceNow's journey to become the defining enterprise software company of the 21st century. And we love co-creating, using, and highlighting our own products to do it. Ultimately, we strive to make the world work better for our employees and customers—when you work in ServiceNow Digital Technology, you work for them.

What You'll Do
- Lead and grow a high-performing team of software engineers focused on enterprise search platforms.
- Provide technical leadership on the implementation and optimization of search experiences, especially those powered by ServiceNow's Search framework and other popular search technologies (e.g., Elasticsearch, Solr, Azure Cognitive Search, Amazon Kendra, etc.).
- Drive the design and delivery of search capabilities that improve discoverability, relevance, and performance across enterprise applications.
- Partner with product managers, business stakeholders, and cross-functional teams to align engineering delivery with business goals.
- Establish engineering best practices, ensure code quality, and promote a culture of continuous improvement.
- Define and track key performance indicators (KPIs) for search quality, reliability, and user engagement.
- Mentor engineers and foster career growth through feedback, coaching, and development planning.
- Champion a data-driven and user-first approach to evolving enterprise search solutions.

Qualifications

To be successful in this role you have:
- Experience integrating AI/ML-powered search experiences, semantic search, or large language model–based retrieval systems.
- Experience with search observability, telemetry, and relevance tuning.
- Strong understanding of access control, personalization, and multilingual search within enterprise systems.
- Hands-on experience in implementing and optimizing ServiceNow Search capabilities (AI Search, Zing, Search Sources, etc.).
- Practical knowledge of one or more enterprise search platforms such as Elasticsearch, Solr, Coveo, Azure Cognitive Search, or similar.
- Proven ability to lead technical discussions, evaluate trade-offs, and make informed architectural decisions in large-scale systems.
- Experience managing multiple business partners and internal stakeholders, translating their needs into actionable engineering plans.
- Strong communication skills, with the ability to collaborate across product, design, and business functions.
- Track record of delivering enterprise-level search experiences with measurable business impact.
- Exposure to ServiceNow platform development beyond search (e.g., ITSM, ITOM, custom apps).

JV19

Additional Information

Work Personas
We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work and their assigned work location. Learn more here.
To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service. Equal Opportunity Employer ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements. Accommodations We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance. Export Control Regulations For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities. From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Company Description

Louis Dreyfus Company is a leading merchant and processor of agricultural goods. Our activities span the entire value chain from farm to fork, across a broad range of business lines, and we leverage our global reach and extensive asset network to serve our customers and consumers around the world. Structured as a matrix organization of six geographical regions and ten platforms, Louis Dreyfus Company is active in over 100 countries and employs approximately 18,000 people globally.

Job Description
- Work with operations SMEs & stakeholders to fully and clearly document AS-IS business processes to the granular (keystroke) level of detail required for UiPath Robotic Process Automation
- Act as the central point for business owners, developers, SMEs, BPO, Infra & Business App teams to ensure accurate & complete requirements gathering & adequate IT security access for BoTs & developers
- Work with different LDC local & global IT teams (Infra/Applications/RPA Global) on IT access requirements & set-ups for BoTs; ensure IT infra & business app stability with prior knowledge of infra changes or new releases
- Support the complete project life cycle, including analysis, documentation (PDD, SDD), user manual creation, training, and post-implementation support
- Provide continuous formal updates to project stakeholders for clear visibility on progress, risks & delays to projects' timely completion, with an action plan and clear roles & responsibilities within a defined timeline
- Lead the full project life cycle covering process analysis; building PDDs, SDDs, and process flows; identifying expected technical & operational exceptions & risks; and defining mitigation plans with clear roles & responsibilities for business & RPA teams
- Work with RPA developers to create a final to-be-state RPA solution, including process flows, using MS Visio
- Manage UAT, identifying all process scenarios & expected outcomes, resulting in a business Go-Live approval
- Manage process KPIs and center dashboards using Power BI & Elasticsearch Kibana tools
- Maintain the RPA pipeline on AH, process selection per FFA (Fit/Feasible/Attractive), and project charter & status monitoring with stakeholders with a precise RACI
- Maintain the BoT user inventory, its allocation to processes, and optimum license utilization with effective vs. desired BoT runs
- Drive business engagement to improve RPA KPIs post production deployment; share precise reasons impacting RPA performance: high exceptions, no/low usage by business

Experience
- Graduate or post-graduate, preferably with an IT background
- Minimum 4 years of relevant experience in Robotic Process Automation (RPA) as a Business Analyst, preferably with working knowledge of RPA tools (UiPath & ABBYY OCR)
- Ability to analyze and question existing processes to ensure accurate understanding, including business & system exceptions
- Good written skills with the ability to produce clear, detailed, and accurate PDDs/SDDs; experience with Power BI & MS Visio flow diagrams is mandatory
- Good interpersonal communication and presentation skills, communicating with subject matter experts, business process owners, and the development team to ensure seamless automation
- Ability to work as a team member and contribute to the overall success of a team, as well as work independently and handle multiple tasks
- Resilience and adaptability to unforeseen work demands

Additional Information

Diversity & Inclusion
LDC is driven by a set of shared values and high ethical standards, with diversity and inclusion being part of our DNA. LDC is an equal opportunity employer committed to providing a working environment that embraces and values diversity, equity and inclusion. LDC encourages diversity, supports local communities and environmental initiatives. We encourage people of all backgrounds to apply.

Sustainability
Sustainable value is at the heart of our purpose as a company. We are passionate about creating fair and sustainable value, both for our business and for other value chain stakeholders: our people, our business partners, the communities we touch and the environment around us.

What We Offer
We provide a dynamic and stimulating international environment, which will stretch and develop your abilities and channel your skills and expertise, with outstanding career development opportunities in one of the largest and most solid private companies in the world.

We offer:
- A workplace culture that embraces diversity and inclusivity
- Opportunities for professional growth and development
- Employee Recognition Program
- Employee Wellness Programs - confidential access to certified counselors for employees and eligible family members, along with monthly wellness awareness sessions
- Certified Great Place to Work
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Consumer and Community Banking - Data Technology, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job Responsibilities
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, opportunity, inclusion, and respect

Required Qualifications, Capabilities, And Skills
- Formal training or certification on software engineering concepts and 3+ years applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Familiarity with modern front-end technologies
- Good understanding of search capabilities like Elasticsearch, OpenSearch, or GraphQL
- Strong subject matter expertise in Python development; should be able to build frameworks leveraging design patterns
- Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security, along with AWS and Terraform
- Demonstrated knowledge of software applications and technical processes within a technical discipline
- Good understanding of Data Catalog or Metadata Management concepts

Preferred Qualifications, Capabilities, And Skills
- Familiarity with modern front-end technologies
- Exposure to cloud technologies
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Experience : 2.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Office (Gurugram) Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Trademo) What do you need for this opportunity? Must have skills required: Python, Django, MongoDB, PostgreSQL Trademo is Looking for: About Trademo Trademo is a Global Supply Chain Intelligence SaaS Company, headquartered in Palo-Alto, US. Trademo collects public and private data on global trade transactions, sanctioned parties, trade tariffs, ESG and other events using its proprietary algorithms.Trademo analyzes and performs advanced data processing on billions of data points (50Tb+) using technologies like Graph Databases, Vector Databases, ElasticSearch, MongoDB, NLP and Machine Learning (LLMs) to build end-to-end visibility on Global Supply Chains. Trademo’s vision is to build a single truth on global supply chains to different stakeholders in global supply chains - discover new commerce opportunities, ensure compliance with trade regulations, and automation for border security. Trademo stands out as one of the rarest Indian SaaS startups to secure 12.5 mn in seed funding. Founded by Shalabh Singhal, who is a third-time tech entrepreneur and an alumni of IIT BHU, CFA Institute USA, and Stanford GSB SEED. Our Trademo is backed by a remarkable team of leaders and entrepreneurs like Amit Singhal (Former Head of Search at Google), Sridhar Ramaswamy (CEO, Snowflake), Neeraj Arora (MD, General Catalyst & Former CBO, Whatsapp Group). —---------------------------------------------------------------------------------------- Role: SDE 2 - Backend Website: www.trademo.com Location: Onsite - Gurgaon What will you be doing here? Design, implement, and maintain scalable features across the stack using Django/Python (backend). 
Build, deploy, and monitor services on Azure, GCP, and IBM Cloud, with an emphasis on scalability, performance, and security. Design data models and write efficient queries using PostgreSQL, MongoDB, and SQL, ensure data integrity and performance. Build and maintain RESTful APIs and internal services to support platform integrations and data workflows. Contribute to CI/CD pipelines, infrastructure-as-code, and monitoring solutions for seamless deployment and observability. Debug complex issues and optimize backend logic for better user experience and system reliability. Collaborate with teams and share knowledge via tech talks and promote tech and engineering best practices within the team. Requirement B-Tech/M-Tech in Computer Science from Tier 1/2 Colleges. Basic understanding of working within Cloud infrastructure and Cloud Native Apps (AWS, Azure, etc.). 3+ years of hands-on experience in software engineering, preferably in product-based or SaaS companies. Deep experience with Python and Django for backend development. Solid understanding of SQL, PostgreSQL, and MongoDB including query optimization and schema design. Sound understanding of RESTful APIs, microservices, containerization (Docker), and version control (Git). Familiarity with CI/CD practices, testing frameworks, and agile methodologies. Desired Profile: A hard-working, humble disposition. Desire to make a strong impact on the lives of millions through your work. Capacity to communicate well with stakeholders as well as team members and be an effective interface between the Engineering and Product/Business team. A quick thinker who can adapt to a fast-paced startup environment and work with minimum supervision What we offer: At Trademo, we want our employees to be comfortable with their benefits so they focus on doing the work they love. Parental leave - Maternity and Paternity Health Insurance Flexible Time Offs Stock Options How to apply for this opportunity? Step 1: Click On Apply! 
And register or log in on our portal.
Step 2: Complete the screening form & upload your updated resume.
Step 3: Increase your chances of getting shortlisted & meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
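Postings like this one lean heavily on "efficient queries" over large PostgreSQL tables; one common pattern behind that phrase is keyset (cursor) pagination. The sketch below is purely illustrative - the `shipments` table and field names are hypothetical, not Trademo's actual schema:

```python
def keyset_page(rows, after_id=None, page_size=50):
    """Keyset (cursor) pagination over records sorted by id.

    The equivalent PostgreSQL query would be roughly:
        SELECT * FROM shipments WHERE id > %(after_id)s
        ORDER BY id LIMIT %(page_size)s
    which stays fast on large tables because it walks the
    primary-key index, unlike OFFSET-based paging.
    """
    ordered = sorted(rows, key=lambda r: r["id"])
    if after_id is not None:
        ordered = [r for r in ordered if r["id"] > after_id]
    page = ordered[:page_size]
    # Only hand back a cursor when a full page suggests more rows remain.
    next_cursor = page[-1]["id"] if len(page) == page_size else None
    return page, next_cursor
```

Callers pass the returned cursor back as `after_id` to fetch the next page; a `None` cursor signals the end of the result set.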
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.
In this role, you will:
Identify and resolve any technical issues arising in production as part of production support.
Contribute to all stages (requirement analysis, impact analysis, development, testing, deployment) of the software development lifecycle.
Requirements
To be successful in this role, you should meet the following requirements:
Design, implement and maintain applications based on Java 8 and above.
Hands-on experience with Java 8 and above, and strong experience with React.
Hands-on experience with Spring Data JPA and a detailed understanding of Hibernate concepts.
Good exposure to Java Spring Boot microservices-based environments.
Awareness of the ELK stack (Elasticsearch, Logstash, Kibana), Splunk or any other logging tool.
Ability to write detailed, scenario-based JUnit test cases.
Good hands-on experience writing and fine-tuning DB queries.
Ability to identify and resolve any technical issues arising in production as part of production support.
Contribute to all stages (requirement analysis, impact analysis, development, testing, deployment) of the software development lifecycle.
The following knowledge or exposure will be an added advantage:
Good knowledge of or experience with the Selenium test automation framework.
Any Java-based workflow tool like Activiti or Camunda.
You'll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by - HSBC Software Development India
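The "scenario-based test cases" requirement is worth unpacking: each test pins down one business scenario by name. The posting asks for JUnit; the sketch below shows the same idea in Python's unittest purely for illustration, with a made-up transaction-screening rule as the system under test:

```python
import unittest

def classify_transaction(amount, country_risk):
    """Toy screening rule (invented for this example): flag transactions
    for review based on amount and country risk."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if country_risk == "high" or amount > 10_000:
        return "review"
    return "clear"

class ClassifyTransactionScenarios(unittest.TestCase):
    # One test per business scenario, named after the scenario it covers.
    def test_small_amount_low_risk_is_cleared(self):
        self.assertEqual(classify_transaction(500, "low"), "clear")

    def test_large_amount_triggers_review(self):
        self.assertEqual(classify_transaction(25_000, "low"), "review")

    def test_high_risk_country_triggers_review(self):
        self.assertEqual(classify_transaction(100, "high"), "review")

    def test_non_positive_amount_is_rejected(self):
        with self.assertRaises(ValueError):
            classify_transaction(0, "low")
```

Run with `python -m unittest`; in JUnit the structure is the same, one `@Test` method per scenario.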
Posted 1 week ago
4.0 years
0 Lacs
India
Remote
Senior L2 SOC Analyst with deep hands-on Elastic monitoring
📍 Location: Remote
📅 Start Date: ASAP
🕒 Employment Type: Full-Time
💼 Experience: Minimum 4 Years in SOC / Cybersecurity (MSSP Experience Preferred)
💰 Salary: Based on technical expertise and skillset
About the Role
IT Butler e-Services is seeking a highly skilled L2 SOC Analyst with strong hands-on experience in Elastic SIEM to join our growing cybersecurity operations team. This role is ideal for professionals who are passionate about security monitoring, incident response, and threat detection using the Elastic Stack (ELK).
Key Responsibilities
Monitor and analyze security events using Elastic SIEM, alongside firewalls, IDS/IPS, EDR, and other telemetry sources.
Triage, investigate, and respond to complex security incidents and escalations from L1 analysts.
Lead root cause analysis and develop mitigation strategies to prevent future incidents.
Drive proactive threat hunting activities within the Elastic environment.
Collaborate with threat intel and engineering teams to optimize detection rules and build advanced dashboards.
Develop and improve incident response playbooks and procedures.
Provide mentorship and technical guidance to L1 analysts.
Ensure incidents are properly logged, tracked, and resolved per defined SLAs.
Requirements
Bachelor's degree in Cybersecurity, Computer Science, or equivalent experience.
Minimum 4 years in a SOC environment, with 2+ years of Elastic Stack experience.
In-depth understanding of security threats, attack vectors, and malware behaviors.
Hands-on experience with the Elastic Stack (Elasticsearch, Kibana, Logstash, Beats).
Familiarity with other tools like QRadar, Sentinel, CrowdStrike, SentinelOne, and Suricata is a plus.
Strong understanding of MITRE ATT&CK, threat hunting, and incident response.
Preferred certifications: GCIA, GCIH, CEH, CySA+, Elastic Certified Analyst, or equivalent.
Excellent communication, reporting, and analytical skills.
What We Offer Competitive salary based on expertise Performance-based incentives Exposure to large-scale enterprise environments Certification and learning support Opportunities for growth into senior or specialized roles Collaborative, global security team culture. Ready to Level Up Your SOC Career? 📧 Apply now: Send your resume to haseeb.r @itbutler.sa 📌 Subject line: L2 SOC Analyst Application – [Your Name]
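A concrete flavor of the Elastic-side work: detection logic is often expressed in Elasticsearch's query DSL. The sketch below builds a hypothetical brute-force-detection query (failed logins aggregated by source IP) as a Python dict ready to pass to an Elasticsearch client's search call; field names follow the Elastic Common Schema (ECS) and would need adjusting to your index mappings:

```python
def failed_login_burst_query(window="15m", threshold=10):
    """Elasticsearch query DSL body: surface source IPs with at least
    `threshold` failed authentication events in the last `window`."""
    return {
        "size": 0,  # we only need the aggregation, not the raw hits
        "query": {
            "bool": {
                "filter": [
                    {"term": {"event.category": "authentication"}},
                    {"term": {"event.outcome": "failure"}},
                    {"range": {"@timestamp": {"gte": f"now-{window}"}}},
                ]
            }
        },
        "aggs": {
            "by_source_ip": {
                "terms": {
                    "field": "source.ip",
                    "min_doc_count": threshold,  # drop IPs under the threshold
                    "size": 50,
                }
            }
        },
    }
```

The same logic could live as a Kibana detection rule; building it programmatically makes thresholds and windows easy to tune per client in an MSSP setting.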
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
On-site
DESCRIPTION
The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers using rigorous quantitative approaches to ensure high quality data/science products for our customers around the world.
We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you will use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences.
Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM).
Here is a glimpse of the problems that this team deals with on a regular basis:
Using live package and truck signals to adjust truck capacities in real-time
HOTW models for Last Mile Channel Allocation
Using LLMs to automate analytical processes and insight generation
Ops research to optimize middle mile truck routes
Working with global partner science teams to affect Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings
Deep Learning models to synthesize attributes of addresses
Abuse detection models to reduce network losses
Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes
2. Design, develop, evaluate and deploy innovative and highly scalable ML/OR models
3. Work closely with other science and engineering teams to drive real-time model implementations
4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
5. Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model maintenance
6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production
7. Lead projects and mentor other scientists and engineers in the use of ML techniques
BASIC QUALIFICATIONS
5+ years of data scientist experience
Experience with data scripting languages (e.g. SQL, Python, R etc.) or statistical/mathematical software (e.g. R, SAS, or Matlab)
Experience with statistical models, e.g.
multinomial logistic regression
Experience in data applications using large scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
Experience working with data engineers and business intelligence engineers collaboratively
Demonstrated expertise in a wide range of ML techniques
PREFERRED QUALIFICATIONS
Experience as a leader and mentor on a data science team
Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
Expertise in Reinforcement Learning and Gen AI is preferred
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
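The "multinomial logistic regression" named in the basic qualifications reduces, at prediction time, to a softmax over per-class linear scores. A dependency-free toy sketch (the weights are illustrative, not a trained model):

```python
import math

def softmax(scores):
    """Numerically stable softmax: shift by the max before exponentiating,
    then normalize so the class probabilities sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(class_weights, features):
    """Multinomial logistic regression inference: one weight vector per class.
    Returns (most likely class index, class probabilities)."""
    scores = [sum(w * x for w, x in zip(wv, features)) for wv in class_weights]
    probs = softmax(scores)
    return max(range(len(probs)), key=probs.__getitem__), probs
```

In practice the weights come from a fitted library model (e.g. scikit-learn or SparkML); the point here is only the score-then-softmax structure.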
Posted 1 week ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include:
Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions.
Help showcase the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another.
Document solution architectures, design decisions, implementation details, and lessons learned.
Stay up to date with the latest trends and advancements in AI, foundation models, and large language models.
Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation.
Preferred Technical And Professional Experience
Experience and working knowledge in COBOL & Java would be preferred.
Experience in code generation, code matching & code translation leveraging LLM capabilities would be a big plus.
Demonstrate a growth mindset to understand clients' business processes and challenges.
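"Cleanse and integrate data in an efficient and reusable manner" typically starts with normalization and de-duplication before load. A minimal, hedged sketch of that step (the key-field convention is an assumption for illustration, not an IBM standard):

```python
def cleanse_records(records, key_fields):
    """Normalize and de-duplicate raw records before loading.

    - trims and lowercases string fields
    - drops records missing any key field
    - keeps only the first occurrence of each key combination
    """
    seen = set()
    cleaned = []
    for rec in records:
        norm = {k: v.strip().lower() if isinstance(v, str) else v
                for k, v in rec.items()}
        if any(norm.get(k) in (None, "") for k in key_fields):
            continue  # incomplete record: no usable natural key
        key = tuple(norm[k] for k in key_fields)
        if key in seen:
            continue  # duplicate of an earlier record
        seen.add(key)
        cleaned.append(norm)
    return cleaned
```

Because the function is pure (input records in, cleaned records out), the same logic can be reused in a batch job or wrapped in a streaming step.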
Posted 1 week ago
3.0 years
1 - 10 Lacs
Hyderābād
On-site
JOB DESCRIPTION
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Consumer and Community Banking - Data Technology, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.
Job responsibilities
Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
Contributes to software engineering communities of practice and events that explore new and emerging technologies
Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
Formal training or certification on software engineering concepts and 3+ years applied experience
Hands-on practical experience in system design, application development, testing, and operational stability
Familiarity with modern front-end technologies
Good understanding of search capabilities like Elasticsearch, OpenSearch or GraphQL.
Strong subject matter expertise in Python development; able to build frameworks leveraging design patterns
Solid understanding of agile methodologies and practices such as CI/CD, Application Resiliency, and Security, plus AWS and Terraform
Demonstrated knowledge of software applications and technical processes within a technical discipline
Good understanding of Data Catalog and Metadata Management concepts
Preferred qualifications, capabilities, and skills
Familiarity with modern front-end technologies
Exposure to cloud technologies
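"Build frameworks leveraging design patterns" is abstract; one concrete instance relevant to the Elasticsearch/OpenSearch line above is a registry pattern that keeps search backends pluggable behind a single factory. A hedged Python sketch (class and method names are invented for illustration):

```python
class SearchBackendRegistry:
    """Registry pattern: backends self-register under a name, and callers
    create them through one factory without importing concrete classes."""
    _backends = {}

    @classmethod
    def register(cls, name):
        def deco(backend_cls):
            cls._backends[name] = backend_cls
            return backend_cls
        return deco

    @classmethod
    def create(cls, name, **kwargs):
        if name not in cls._backends:
            raise KeyError(f"unknown backend: {name}")
        return cls._backends[name](**kwargs)

@SearchBackendRegistry.register("memory")
class InMemoryBackend:
    """Stand-in backend for tests; a real one would wrap an
    Elasticsearch or OpenSearch client behind the same interface."""
    def __init__(self):
        self.docs = []

    def index(self, doc):
        self.docs.append(doc)

    def search(self, term):
        return [d for d in self.docs if term in d.get("body", "")]
```

Swapping Elasticsearch for OpenSearch then means registering a new class, not touching call sites: `backend = SearchBackendRegistry.create("memory")`.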
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon
On-site
DESCRIPTION
The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers using rigorous quantitative approaches to ensure high quality data/science products for our customers around the world.
We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you will use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences.
Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM).
Here is a glimpse of the problems that this team deals with on a regular basis:
Using live package and truck signals to adjust truck capacities in real-time
HOTW models for Last Mile Channel Allocation
Using LLMs to automate analytical processes and insight generation
Ops research to optimize middle mile truck routes
Working with global partner science teams to affect Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings
Deep Learning models to synthesize attributes of addresses
Abuse detection models to reduce network losses
Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes
2. Design, develop, evaluate and deploy innovative and highly scalable ML/OR models
3. Work closely with other science and engineering teams to drive real-time model implementations
4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
5. Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model maintenance
6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production
7. Lead projects and mentor other scientists and engineers in the use of ML techniques
BASIC QUALIFICATIONS
5+ years of data scientist experience
Experience with data scripting languages (e.g. SQL, Python, R etc.) or statistical/mathematical software (e.g. R, SAS, or Matlab)
Experience with statistical models, e.g.
multinomial logistic regression
Experience in data applications using large scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
Experience working with data engineers and business intelligence engineers collaboratively
Demonstrated expertise in a wide range of ML techniques
PREFERRED QUALIFICATIONS
Experience as a leader and mentor on a data science team
Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
Expertise in Reinforcement Learning and Gen AI is preferred
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 1 week ago
7.0 years
2 - 5 Lacs
Ahmedabad
On-site
Position: Java Developer (7+ Years) (NV70FCT RM 3499)
Job Description:
Programming Languages: Extensive experience with Java (preferably Java 8+) and/or Go for backend system development in a microservices architecture.
Cloud & Microservices: Strong knowledge of cloud-native design and architecture, with hands-on experience using AWS, GCP, or Azure for building scalable network management solutions.
Database Technologies: Expertise in SQL databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, DynamoDB, Elasticsearch), including data modeling, querying, and performance optimization.
Messaging Systems: Hands-on experience with event-driven architecture and messaging systems like Apache Kafka, RabbitMQ, or ActiveMQ for handling real-time communication between services.
CI/CD & DevOps: Familiarity with CI/CD tools like Jenkins, GitLab CI, CircleCI, and other modern DevOps practices for automated testing, building, and deployment pipelines.
Agile Methodologies: Comfortable working in Agile environments (Scrum, Kanban), with experience managing complex tasks and collaborating in cross-functional teams.
Performance Optimization: Strong problem-solving skills for optimizing applications for performance, scalability, and low latency in high-traffic, distributed systems.
API Development & Testing: Strong hands-on experience in developing RESTful APIs and integration testing to ensure smooth interoperability with wireless and wired devices.
Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Ahmedabad, Chennai, Gurgaon, Kochi, Noida, Pollachi
Experience: 7 Years
Notice period: 0-30 days
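The event-driven requirement above (Kafka, RabbitMQ, ActiveMQ) boils down to topics, appended messages, and per-consumer-group offsets. A broker-free Python toy that mimics those Kafka-style semantics - purely illustrative, not a client for any real messaging system:

```python
from collections import defaultdict

class MiniBroker:
    """In-process stand-in for a Kafka-style topic log: producers append,
    and each consumer group reads forward from its own committed offset."""
    def __init__(self):
        self.topics = defaultdict(list)          # topic -> ordered messages
        self.offsets = defaultdict(int)          # (topic, group) -> next offset

    def produce(self, topic, message):
        self.topics[topic].append(message)

    def consume(self, topic, group, max_messages=10):
        """Return the next batch for this group and advance its offset."""
        start = self.offsets[(topic, group)]
        batch = self.topics[topic][start:start + max_messages]
        self.offsets[(topic, group)] += len(batch)
        return batch
```

Because offsets are tracked per group, two services can each see the full stream independently, which is the property that makes event-driven integration loosely coupled.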
Posted 1 week ago