5.0 years
5 - 10 Lacs
Gurgaon
On-site
DESCRIPTION The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight-generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operational processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world.

We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you will use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions that align with the org's priorities and production constraints while still creating impact. You will achieve this by leveraging strong leadership, communication and data science skills, and by acquiring domain knowledge pertaining to delivery operations systems. You will provide ML thought leadership to technical and business leaders, and you possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing at internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM).
Here is a glimpse of the problems this team deals with on a regular basis:
- Using live package and truck signals to adjust truck capacities in real time
- HOTW models for Last Mile channel allocation
- Using LLMs to automate analytical processes and insight generation
- Operations research to optimize middle-mile truck routes
- Working with global partner science teams on Reinforcement Learning-based pricing models and estimating Shipments Per Route for $MM savings
- Deep Learning models to synthesize attributes of addresses
- Abuse detection models to reduce network losses

Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes
2. Design, develop, evaluate and deploy innovative and highly scalable ML/OR models
3. Work closely with other science and engineering teams to drive real-time model implementations
4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
5. Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation and model maintenance
6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale, complex ML models in production
7. Lead projects and mentor other scientists and engineers in the use of ML techniques

BASIC QUALIFICATIONS
- 5+ years of data scientist experience
- Experience with data scripting languages (e.g., SQL, Python, R) or statistical/mathematical software (e.g., R, SAS, or Matlab)
- Experience with statistical models, e.g. multinomial logistic regression
- Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
- Experience working collaboratively with data engineers and business intelligence engineers
- Demonstrated expertise in a wide range of ML techniques

PREFERRED QUALIFICATIONS
- Experience as a leader and mentor on a data science team
- Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
- Expertise in Reinforcement Learning and Gen AI

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
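The basic qualifications call out statistical models such as multinomial logistic regression. As a minimal, self-contained sketch of how such a fitted model scores a feature vector (the weights, biases, and three-class setup below are purely illustrative, not from the posting):

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max score before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_proba(x, weights, biases):
    # One linear score per class (dot(w_k, x) + b_k), then softmax over classes.
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    return softmax(scores)

# Hypothetical 3-class model over 2 features (e.g. delivery-delay buckets).
weights = [[0.5, -0.2], [0.1, 0.4], [-0.6, 0.3]]
biases = [0.0, 0.1, -0.1]
probs = predict_proba([1.0, 2.0], weights, biases)
```

In practice the weights would come from a fitted library model rather than being hand-written; this only shows the scoring step the qualification refers to.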
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
As an SDE3, you’ll work on backend-heavy (70% backend Node.js, 30% frontend React.js) full-stack features that bring real-world booking experiences to life — such as workflows, payments, lead generation, form submissions, notifications, and search. You’ll collaborate with the platform, booking engine, and integrations teams to ensure that our solutions are scalable, intuitive, and flexible across industries. This role is ideal for someone who thrives in product-oriented backend systems, has a strong full-stack foundation, and is comfortable using AI tools to enhance their development speed and effectiveness.

Requirements:
- 4+ years of experience in software engineering with a strong focus on backend systems and API design
- Deep expertise in Node.js/NestJS, TypeScript, and distributed-systems fundamentals
- Experience working with both SQL and NoSQL databases at scale
- Ability to work end-to-end across the stack — especially owning backend complexity and integrating with UI components
- Solid understanding of async workflows, queues, background jobs, webhooks, and system state transitions
- Familiarity with GCP services, or willingness to get hands-on with them
- Openness to using AI tools (e.g., Cursor, GitHub Copilot, Claude Code) to accelerate development and improve code quality
- Strong debugging and system design skills; ability to make pragmatic architectural decisions

Responsibilities:
- Design and build backend-heavy scheduling features that power industry use cases like meetings, rentals, and services
- Build scalable APIs and business logic using Node.js/NestJS, with Firestore, PostgreSQL, Elasticsearch, and Redis as data layers
- Leverage GCP tools like Cloud Tasks, Cloud Scheduler, Pub/Sub, and Cloud Functions to build reliable, event-driven infrastructure
- Contribute to the full development lifecycle — from designing APIs and modeling data to testing, deploying, and maintaining systems
- Integrate seamlessly with the App and Integrations teams to ensure cohesive cross-squad delivery
- Work on features like form submissions, appointment workflows, notifications, payments, notes/tasks, and intelligent search capabilities
- Build modular systems that adapt to diverse customer workflows while maintaining strong performance and reliability

Bonus Points:
- Experience with appointment systems, lead funnels, workflow engines, or search infrastructure
- Background in building domain-specific features for industries like healthcare, education, fitness, or local services
- Experience working with time-based systems, recurrence logic, or custom state machines
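The bonus points mention time-based systems and recurrence logic. As a minimal sketch of expanding a weekly recurrence (sketched in Python for brevity even though the stack is Node.js; the function name is illustrative, and a real booking system would also handle time zones, DST shifts, and exception dates):

```python
from datetime import datetime, timedelta

def expand_weekly(start, weekday, count):
    """Return the next `count` occurrences of a weekly slot on `weekday`
    (0=Monday), starting at or after `start`. Naive sketch only."""
    days_ahead = (weekday - start.weekday()) % 7
    first = start + timedelta(days=days_ahead)
    return [first + timedelta(weeks=i) for i in range(count)]

# 2024-01-01 is a Monday; weekday=2 asks for Wednesdays at 10:00.
slots = expand_weekly(datetime(2024, 1, 1, 10, 0), weekday=2, count=3)
```

The hard parts of production recurrence (RRULE semantics, DST-aware arithmetic, cancellations) are exactly what this sketch leaves out.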
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
Hi All, we are currently hiring for the position of Senior Java Software Engineer to join our Development Team at Izen Labs, Chennai. The ideal candidate will have a strong background in Java development, with experience in Spring Boot, REST APIs, Elasticsearch, Redis, Oracle databases, and modern development tools. You will be responsible for building and maintaining scalable, high-performance backend systems, collaborating with cross-functional teams, and contributing to the overall software architecture and quality standards. If you are interested and meet the qualifications below, please send your updated resume to sathish.kumar@izenlabs.org

Company Name: Izen Labs
Job Title: Senior Java Software Engineer
Location: Chennai

Technical Skills & Experience
- 5+ years of hands-on software development experience
- Programming languages: Java – 5+ years; HTML, JavaScript – 3+ years
- Database: Oracle, SQL, JDBC – 5+ years
- Tools & environment: Git, Maven, Ant, Eclipse/IntelliJ, JUnit, Docker – 3+ years
- Frameworks/technologies: Spring Framework / Spring Boot / REST – 3+ years; Spring Data, Hibernate, JPA – 2+ years; WebLogic or Tomcat – 3+ years; Redis, Elasticsearch, or other NoSQL databases – 2+ years

Essential Functions and Tasks
- Consistently writes and documents high-quality code to deliver features of moderate complexity and projects with multiple dependencies.
- Writes tests to verify the functionality and stability of code; establishes monitoring and alerting systems to ensure code reliability; contributes to defining standards in testing, security, monitoring, and alerting.
- Participates in on-call rotations and stakeholder inquiries; follows best practices to troubleshoot and manage production incidents independently.
- Solicits and responds to code review feedback to improve code and obtain approval; provides feedback on code and design to other engineers.
- Identifies and reduces code-level tech debt (e.g., code duplication, low test coverage).
- Proactively improves the performance and efficiency of our own software and systems; exercises good judgment on engineering tradeoffs for a feature.
- Applies knowledge of software design patterns and best practices, and evaluates trade-offs to contribute to high-quality designs and artifacts (e.g., ERDs, design notes) that address the business requirements.
- Leverages relevant existing solutions and identifies opportunities to improve them.
- Works with stakeholders to understand and translate customer problems or business requirements into system design.
- Collaborates with peers to identify opportunities for operational improvements within the immediate team.

Regards,
Sathish
9710334494
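The role pairs an Oracle-backed system with Redis. One common way those fit together is the cache-aside read path; the pattern is language-agnostic, so here is a sketch in Python with a plain dict standing in for Redis and a lambda standing in for a JDBC query (all names are illustrative):

```python
import json

class CacheAside:
    """Cache-aside read path: try the cache, fall back to the source of
    truth, then populate the cache for subsequent reads."""
    def __init__(self, cache, load):
        self.cache, self.load = cache, load
        self.misses = 0

    def get(self, key):
        hit = self.cache.get(key)
        if hit is not None:
            return json.loads(hit)       # cache hit: deserialize and return
        self.misses += 1
        value = self.load(key)           # miss: query the backing store
        self.cache[key] = json.dumps(value)  # real code would also set a TTL
        return value

db = {"user:1": {"name": "Asha"}}        # stand-in for the Oracle table
reader = CacheAside({}, lambda k: db[k])
first = reader.get("user:1")   # miss: loads from the backing store
second = reader.get("user:1")  # hit: served from the cache
```

The design choice worth noting: the application, not the database, owns cache population, which is why invalidation and TTLs become the hard part in real deployments.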
Posted 1 week ago
8.0 years
0 Lacs
Mohali district, India
On-site
Job Title: Senior DevOps Engineer – Travel Domain
Location: Mohali
Employment Type: Full-time
Experience: 8+ years (minimum 2 years in the travel domain preferred)

Key Responsibilities:
● Architect, deploy, and manage highly available AWS infrastructure using services like EC2, ECS/EKS, Lambda, S3, RDS, DynamoDB, VPC, CloudFront, CloudFormation/Terraform, and more.
● Manage and integrate Active Directory and AWS Directory Service for centralized user authentication and role-based access controls.
● Design and implement network security architectures, including firewall rules, security groups, AWS Network Firewall, and WAF.
● Build and maintain centralized logging, monitoring, and alerting solutions using the ELK Stack (Elasticsearch, Logstash, Kibana), Amazon OpenSearch, CloudWatch, or similar tools.
● Implement and maintain CI/CD pipelines (Jenkins, GitLab CI, AWS CodePipeline) for automated deployments and infrastructure updates.
● Conduct security assessments and vulnerability management, and implement controls aligned with PCI-DSS, CIS benchmarks, and AWS security best practices.
● Configure encryption (at rest and in transit); manage IAM policies, roles, MFA, KMS, and secrets management.
● Perform firewall rule management, configuration, and auditing in cloud and hybrid environments.
● Automate routine infrastructure tasks using scripts (Python, Bash, Go) and configuration management tools (Ansible, Chef, or Puppet).
● Design and maintain disaster recovery plans, backups, and cross-region failover strategies.

Good to have:
● AWS certifications: DevOps Engineer, Solutions Architect, Security Specialty.
● Experience participating in PCI-DSS audits or implementing PCI controls.
● Familiarity with zero-trust security architectures and microsegmentation.
● Hands-on experience with cloud cost management tools (AWS Cost Explorer, CloudHealth, etc.).
● Experience with travel-domain projects: OBT, GDS integrations, transportation APIs.
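The responsibilities above combine firewall-rule auditing with scripted automation in Python. A minimal sketch of one such audit check, flagging ingress rules open to the internet on sensitive ports, over rules represented as plain dicts (a real script would fetch rules via the AWS APIs; the group names and port list are illustrative):

```python
def risky_ingress(rules):
    """Return (group, port) pairs for ingress rules that expose a
    sensitive port to the whole internet. Sketch only: real audits
    must also handle IPv6 ranges, port ranges, and prefix lists."""
    SENSITIVE = {22, 3389, 5432}  # SSH, RDP, Postgres
    findings = []
    for r in rules:
        if r["cidr"] == "0.0.0.0/0" and r["port"] in SENSITIVE:
            findings.append((r["group"], r["port"]))
    return findings

rules = [
    {"group": "sg-web", "port": 443,  "cidr": "0.0.0.0/0"},  # fine: public HTTPS
    {"group": "sg-db",  "port": 5432, "cidr": "0.0.0.0/0"},  # flagged
    {"group": "sg-ops", "port": 22,   "cidr": "10.0.0.0/8"}, # fine: internal only
]
findings = risky_ingress(rules)
```

Checks like this are typically wired into CI or a scheduled job so drift is caught before an audit does.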
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Responsibilities:
1. Architect and develop scalable AI applications focused on indexing, retrieval systems, and distributed data processing.
2. Collaborate closely with framework engineering, data science, and full-stack teams to deliver an integrated developer experience for building next-generation context-aware applications (i.e., Retrieval-Augmented Generation (RAG)).
3. Design, build, and maintain scalable infrastructure for high-performance indexing, search engines, and vector databases (e.g., Pinecone, Weaviate, FAISS).
4. Implement and optimize large-scale ETL pipelines, ensuring efficient data ingestion, transformation, and indexing workflows.
5. Lead the development of end-to-end indexing pipelines, from data ingestion to API delivery, supporting millions of data points.
6. Deploy and manage containerized services (Docker, Kubernetes) on cloud platforms (AWS, Azure, GCP) via infrastructure-as-code (e.g., Terraform, Pulumi).
7. Collaborate on building and enhancing user-facing APIs that provide developers with advanced data retrieval capabilities.
8. Focus on creating high-performance systems that scale effortlessly, ensuring optimal performance in production environments with massive datasets.
9. Stay updated on the latest advancements in LLMs, indexing techniques, and cloud technologies to integrate them into cutting-edge applications.
10. Drive ML and AI best practices across the organization to ensure scalable, maintainable, and secure AI infrastructure.

Qualifications:
Educational background: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or a related field; PhD preferred. Certifications in cloud computing (AWS, Azure, GCP) and ML technologies are a plus.

Technical skills:
1. Expertise in Python and related frameworks (Pydantic, FastAPI, Poetry, etc.) for building scalable AI/ML solutions.
2. Proven experience with indexing technologies: building, managing, and optimizing vector databases (Pinecone, FAISS, Weaviate) and search engines (Elasticsearch, OpenSearch).
3. Machine learning/AI development: hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow) and fine-tuning LLMs for retrieval-based tasks.
4. Cloud services & infrastructure: deep expertise in architecting and deploying scalable, containerized AI/ML services on cloud platforms using Docker, Kubernetes, and infrastructure-as-code tools like Terraform or Pulumi.
5. Data engineering: strong understanding of ETL pipelines, distributed data processing (e.g., Apache Spark, Dask), and data orchestration frameworks (e.g., Apache Airflow, Prefect).
6. API development: skilled in designing and building RESTful APIs with a focus on user-facing services and seamless integration for developers.
7. Full-stack engineering: knowledge of front-end/back-end interactions and how AI models interact with user interfaces.
8. DevOps & MLOps: experience with CI/CD pipelines, version control (Git), model monitoring, and logging in production environments. Experience with LLMOps tools (LangSmith, MLflow) is a plus.
9. Data storage: experience with SQL and NoSQL databases, distributed storage systems, and cloud-native data storage solutions (S3, Google Cloud Storage).
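Several of the items above center on vector databases and retrieval. Stripped of the production machinery, the retrieval step reduces to nearest-neighbour search over embeddings; here is a brute-force, standard-library-only sketch (real systems like Pinecone or FAISS use approximate indexes, and the two-dimensional vectors below are illustrative stand-ins for embeddings):

```python
import math

class TinyVectorIndex:
    """Minimal in-memory stand-in for a vector database: brute-force
    cosine-similarity ranking over stored (id, vector) pairs."""
    def __init__(self):
        self.items = []  # list of (doc_id, vector)

    def add(self, doc_id, vec):
        self.items.append((doc_id, vec))

    def query(self, vec, k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        ranked = sorted(self.items, key=lambda it: cos(vec, it[1]), reverse=True)
        return [doc_id for doc_id, _ in ranked[:k]]

idx = TinyVectorIndex()
idx.add("doc-a", [1.0, 0.0])
idx.add("doc-b", [0.0, 1.0])
top = idx.query([0.9, 0.1], k=1)  # query vector points mostly at doc-a
```

In a RAG pipeline the ids returned here would be resolved to document chunks and placed into the LLM prompt; the index engineering (sharding, ANN structures, filtering) is what the role itself is about.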
Posted 2 weeks ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description

About the Role
Oracle Health Data Intelligence is looking for a talented Software Engineer to help us build the next-generation Data Platform powering intelligent AI agents at scale. This is a foundational role on our Data Platform team, where you’ll work with experienced engineers to design, build, and optimise systems that ingest, transform, and serve data across our workflows. If you’re passionate about building resilient, cloud-native data infrastructure that fuels cutting-edge AI and transforms healthcare for the world, we want to hear from you.

What You’ll Do
- Design and implement robust, scalable, and low-latency data pipelines for training, evaluation, and serving of AI agents.
- Build cloud-native data infrastructure (streaming + batch) on the OCI public cloud.
- Develop and optimise semantic indexing and vector search infrastructure, such as Oracle 23ai.
- Contribute to the architecture and evolution of our data lakehouse, feature store, and metadata systems.
- Optimise data workflows for cost, speed, and reliability in a distributed environment.
- Participate in code reviews, system design discussions, and technical planning.

What We’re Looking For
- 3–5+ years of professional experience in software engineering or data engineering roles.
- Proficiency in Java, Go, Python, or Scala, with solid software engineering and design fundamentals.
- Experience building scalable data pipelines using tools like Apache Spark, Crunch, Beam, or Flink.
- Hands-on experience with cloud-native data services (e.g., Oracle BDS, BigQuery, S3, Redshift, Snowflake, Databricks).
- Strong understanding of data modelling, data governance, and pipeline observability best practices.
- Experience working with at least one public cloud provider (OCI, AWS, GCP, or Azure).
- Knowledge of real-time data processing and event-driven architectures.
- Experience with containerization and orchestration (Docker, Kubernetes, etc.).

Bonus Points
- Experience with semantic indexing and vector search systems (e.g., Oracle 23ai, Elasticsearch with dense vector support).
- Health domain expertise.

Why Join Us
Be part of a mission-driven team building infrastructure that directly powers AI breakthroughs. Work in a fast-paced, collaborative environment with a culture of ownership and impact.

Responsibilities
As a member of the software engineering division, you will assist in defining and developing software for tasks associated with developing, debugging, or designing software applications or operating systems. Provide technical leadership to other software developers. Specify, design, and implement modest changes to existing software architecture to meet changing needs.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
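The ingest/transform/serve pipeline work this posting describes can be sketched, at its smallest, as composable stages (the field names and rounding rule are illustrative, not from the posting; a production pipeline would run stages like these on Spark, Beam, or Flink rather than plain generators):

```python
def ingest(rows):
    # Source stage: in production this would read from a stream or data lake.
    yield from rows

def transform(records):
    # Transform stage: drop records with missing values, normalise the rest.
    for r in records:
        if r.get("value") is not None:
            yield {"id": r["id"], "value": round(float(r["value"]), 2)}

def run_pipeline(rows):
    # Serve stage: materialise the transformed records for downstream use.
    return list(transform(ingest(rows)))

out = run_pipeline([
    {"id": 1, "value": "3.14159"},
    {"id": 2, "value": None},   # filtered out by the transform stage
])
```

Keeping each stage a pure function over an iterator is what makes the same logic portable to a distributed engine later.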
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description

About the Role
Oracle Health Data Intelligence is looking for a talented Software Engineer to help us build the next-generation Data Platform powering intelligent AI agents at scale. This is a foundational role on our Data Platform team, where you’ll work with experienced engineers to design, build, and optimise systems that ingest, transform, and serve data across our workflows. If you’re passionate about building resilient, cloud-native data infrastructure that fuels cutting-edge AI and transforms healthcare for the world, we want to hear from you.

What You’ll Do
- Design and implement robust, scalable, and low-latency data pipelines for training, evaluation, and serving of AI agents.
- Build cloud-native data infrastructure (streaming + batch) on the OCI public cloud.
- Develop and optimise semantic indexing and vector search infrastructure, such as Oracle 23ai.
- Contribute to the architecture and evolution of our data lakehouse, feature store, and metadata systems.
- Optimise data workflows for cost, speed, and reliability in a distributed environment.
- Participate in code reviews, system design discussions, and technical planning.

What We’re Looking For
- 3–5+ years of professional experience in software engineering or data engineering roles.
- Proficiency in Java, Go, Python, or Scala, with solid software engineering and design fundamentals.
- Experience building scalable data pipelines using tools like Apache Spark, Crunch, Beam, or Flink.
- Hands-on experience with cloud-native data services (e.g., Oracle BDS, BigQuery, S3, Redshift, Snowflake, Databricks).
- Strong understanding of data modelling, data governance, and pipeline observability best practices.
- Experience working with at least one public cloud provider (OCI, AWS, GCP, or Azure).
- Knowledge of real-time data processing and event-driven architectures.
- Experience with containerization and orchestration (Docker, Kubernetes, etc.).

Bonus Points
- Experience with semantic indexing and vector search systems (e.g., Oracle 23ai, Elasticsearch with dense vector support).
- Health domain expertise.

Why Join Us
Be part of a mission-driven team building infrastructure that directly powers AI breakthroughs. Work in a fast-paced, collaborative environment with a culture of ownership and impact.

Responsibilities
As a member of the software engineering division, you will assist in defining and developing software for tasks associated with developing, debugging, or designing software applications or operating systems. Provide technical leadership to other software developers. Specify, design, and implement modest changes to existing software architecture to meet changing needs.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Join the EDG team as a Full Stack Software Engineer. The EDG team is responsible for improving the consumer experience by implementing an enterprise device gateway to manage device health signal acquisition, centralize consumer consent, facilitate efficient health signal distribution, and empower UHC with connected insights across the health and wellness ecosystem. The team has a strong and integrated relationship with the product team, based on strong collaboration, trust, and partnership. Goals for the team are focused on creating meaningful positive impact for our customers through clear and measurable metrics analysis.

Primary Responsibilities
- Write high-quality, fault-tolerant code; normally 70% backend and 30% front-end (though the exact ratio will depend on your interests)
- Build high-scale systems, libraries, and frameworks, and create test plans
- Monitor production systems and provide on-call support
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- BS in Computer Science, Engineering or a related technical field, or equivalent experience
- 2+ years of experience with JS libraries and frameworks, such as Angular, React or others
- 2+ years of experience in Scala, Java, or another compiled language

Preferred Qualifications
- Experience with web design
- Experience using RESTful APIs and asynchronous JS
- Experience in design and development
- Testing experience with Scala or Java
- Database and caching experience, SQL and NoSQL (Postgres, Elasticsearch, or MongoDB)
- Proven interest in learning Scala

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone — of every race, gender, sexuality, age, location and income — deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
Posted 2 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description

About the Role
Oracle Health Data Intelligence is looking for a talented Software Engineer to help us build the next-generation Data Platform powering intelligent AI agents at scale. This is a foundational role on our Data Platform team, where you’ll work with experienced engineers to design, build, and optimise systems that ingest, transform, and serve data across our workflows. If you’re passionate about building resilient, cloud-native data infrastructure that fuels cutting-edge AI and transforms healthcare for the world, we want to hear from you.

What You’ll Do
- Design and implement robust, scalable, and low-latency data pipelines for training, evaluation, and serving of AI agents.
- Build cloud-native data infrastructure (streaming + batch) on the OCI public cloud.
- Develop and optimise semantic indexing and vector search infrastructure, such as Oracle 23ai.
- Contribute to the architecture and evolution of our data lakehouse, feature store, and metadata systems.
- Optimise data workflows for cost, speed, and reliability in a distributed environment.
- Participate in code reviews, system design discussions, and technical planning.

What We’re Looking For
- 3–5+ years of professional experience in software engineering or data engineering roles.
- Proficiency in Java, Go, Python, or Scala, with solid software engineering and design fundamentals.
- Experience building scalable data pipelines using tools like Apache Spark, Crunch, Beam, or Flink.
- Hands-on experience with cloud-native data services (e.g., Oracle BDS, BigQuery, S3, Redshift, Snowflake, Databricks).
- Strong understanding of data modelling, data governance, and pipeline observability best practices.
- Experience working with at least one public cloud provider (OCI, AWS, GCP, or Azure).
- Knowledge of real-time data processing and event-driven architectures.
- Experience with containerization and orchestration (Docker, Kubernetes, etc.).

Bonus Points
- Experience with semantic indexing and vector search systems (e.g., Oracle 23ai, Elasticsearch with dense vector support).
- Health domain expertise.

Why Join Us
Be part of a mission-driven team building infrastructure that directly powers AI breakthroughs. Work in a fast-paced, collaborative environment with a culture of ownership and impact.

Responsibilities
As a member of the software engineering division, you will assist in defining and developing software for tasks associated with developing, debugging, or designing software applications or operating systems. Provide technical leadership to other software developers. Specify, design, and implement modest changes to existing software architecture to meet changing needs.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 weeks ago
7.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company: Our client is a global technology consulting and digital solutions company that enables enterprises to reimagine business models and accelerate innovation through digital technologies. Powered by more than 84,000 entrepreneurial professionals across more than 30 countries, it caters to over 700 clients with its extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes.

Job Title: Java Backend
Location: Hyderabad (Rai Durg) / Gurgaon (Ambience Island, DLF Phase 3)
Experience: 7 to 8 Years
Employment Type: Contract to Hire
Work Mode: Hybrid
Notice Period: Immediate Joiners Only

Job Description:
- Bachelor's/Master's degree in Computer Science, Data Science or equivalent
- Excellent communication and interpersonal skills
- Strong analytical skills and learning agility
- Must be hands-on in coding, specifically using Java and related technologies
- Ability to work in a collaborative work environment
- 9 to 12 years of experience in application development using Core Java, Spring Boot, RESTful services, JPA, SQL, etc.
- Strong expertise and knowledge in Core Java, multithreading, microservices, Spring Boot, RESTful services, collections and data structures
- Experience with OpenSearch (or Elasticsearch), including cluster management, indexing strategies, and performance tuning
- AWS; JavaScript frameworks like Angular; the Reactor framework
- Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development
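Among the requirements above is OpenSearch/Elasticsearch indexing. The bulk-ingestion path both engines share expects a newline-delimited request body of alternating action and document lines, ending in a trailing newline; here is a small sketch of building that payload (shown in Python for brevity; the index name and document fields are illustrative, and a real client would then POST this to the _bulk endpoint):

```python
import json

def bulk_index_body(index, docs):
    """Build an OpenSearch/Elasticsearch _bulk request body: one action
    line plus one document line per doc, newline-delimited, with the
    trailing newline the API requires."""
    lines = []
    for doc in docs:
        # Action line names the target index and document id.
        lines.append(json.dumps({"index": {"_index": index, "_id": doc["id"]}}))
        # Source line is the document itself.
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = bulk_index_body("orders", [{"id": "1", "total": 42}])
```

Batching documents this way, instead of one request per document, is the first indexing-strategy lever the posting's "performance tuning" line refers to.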
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana
On-site
General information Country India State Telangana City Hyderabad Job ID 45303 Department Development Description & Requirements As a software lead, you will play a critical role in defining and driving the architectural vision of our RPA product. You will ensure technical excellence, mentor engineering teams, and collaborate across departments to deliver innovative automation solutions. This is a unique opportunity to influence the future of RPA technology and make a significant impact on the industry. RESPONSIBILITIES: Define and lead the architectural design and development of the RPA product, ensuring solutions are scalable, maintainable, and aligned with organizational strategic goals. Provide technical leadership and mentor team members on architectural best practices. Analyze and resolve complex technical challenges, including performance bottlenecks, scalability issues, and integration challenges, to ensure high system reliability and performance. Collaborate with cross-functional stakeholders, including product managers, QA, and engineering teams, to define system requirements, prioritize technical objectives, and design cohesive solutions. Provide architectural insights during sprint planning and agile processes. Establish and enforce coding standards, best practices, and guidelines across the engineering team, conducting code reviews with a focus on architecture, maintainability, and future scalability. Develop and maintain comprehensive documentation for system architecture, design decisions, and implementation details, ensuring knowledge transfer and facilitating team collaboration. Architect and oversee robust testing strategies, including automated unit, integration, and regression tests, to ensure adherence to quality standards and efficient system validation. Research and integrate emerging technologies, particularly advancements in RPA and automation, to continually enhance the product’s capabilities and technical stack. 
Drive innovation and implement best practices within the team. Serve as a technical mentor and advisor to engineering teams, fostering professional growth and ensuring alignment with the overall architectural vision. Ensure that the RPA product adheres to security and compliance standards by incorporating secure design principles, conducting regular security reviews, and implementing necessary safeguards to protect data integrity, confidentiality, and availability. EDUCATION & EXPERIENCE: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field. 8+ years of professional experience in software development. REQUIRED SKILLS: Expertise in object-oriented programming languages such as Java, C#, or similar, with a strong understanding of design patterns and principles. Deep familiarity with software development best practices, version control systems (e.g., Git), and continuous integration/continuous delivery (CI/CD) workflows. Proven experience deploying and managing infrastructure on cloud platforms such as AWS, Azure, or Google Cloud, including knowledge of containerization technologies like Docker and orchestration tools like Kubernetes. Strong proficiency in architecting, building, and optimizing RESTful APIs and microservices, with familiarity in tools like Swagger/OpenAPI and Postman for design and testing. Comprehensive knowledge of SQL databases (e.g., PostgreSQL, SQL Server) with expertise in designing scalable and reliable data models, including creating detailed Entity-Relationship Diagrams (ERDs) and optimizing database schemas for performance and maintainability. Demonstrated experience in building and maintaining robust CI/CD pipelines using tools such as Jenkins or GitLab CI. Demonstrated ability to lead teams in identifying and resolving complex software and infrastructure issues using advanced troubleshooting techniques and tools.
Exceptional communication and leadership skills, with the ability to guide and collaborate with cross-functional teams, bridging technical and non-technical stakeholders. Excellent written and verbal communication skills, with a focus on documenting technical designs, code, and system processes clearly and concisely. Comfortable and experienced in agile development environments, demonstrating adaptability to evolving requirements and timelines while maintaining high productivity and focus on deliverables. Familiarity with security best practices in software development, such as OWASP guidelines, secure coding principles, and implementing authentication/authorization frameworks (e.g., OAuth, SAML, JWT). Experience with microservices architecture, message brokers (e.g., RabbitMQ, Kafka), and event-driven design. Extensive experience in performance optimization and scalability, with a focus on designing high-performance systems and utilizing profiling tools and techniques to optimize both code and infrastructure for maximum efficiency. PREFERRED SKILLS: Experience with serverless architecture, including deploying and managing serverless applications using platforms such as AWS Lambda, Azure Functions, or Google Cloud Functions, to build scalable, cost-effective solutions. Experience with RPA tools or frameworks (e.g., UiPath, Automation Anywhere, Blue Prism) is a plus. Experience with Generative AI technologies, including working with frameworks like TensorFlow, PyTorch, or Hugging Face, and integrating AI/ML models into software applications. Hands-on experience with data analytics or logging tools like ELK Stack (Elasticsearch, Logstash, Kibana) or Splunk for monitoring and troubleshooting application performance About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. 
Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve, now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
delhi
On-site
As a Software Engineer at Bain & Company, you will be responsible for delivering application modules with minimal supervision, providing guidance to associate engineers, and collaborating with an Agile/Scrum software development team. Your primary focus will be on building and supporting Bain's strategic internal software systems to deliver value to global users and support key business initiatives. Your responsibilities will include working on technical delivery tasks such as developing and updating enterprise applications, analyzing user stories, preparing work estimates, writing unit test plans, participating in testing and implementation of application releases, and providing ongoing support for applications in use. Additionally, you will contribute to research efforts to evaluate new technologies and tools, share knowledge within the team, and present technical findings and recommendations. You should have expertise in frameworks like .NET & .NET Core, languages such as C# and T-SQL, web frameworks/libraries like Angular/React, JavaScript, HTML, CSS, Bootstrap, RDBMS like Microsoft SQL Server, cloud services like Microsoft Azure, unit testing tools like XUnit and Jasmine, and DevOps practices with GitHub Actions. Familiarity with search engines, NoSQL databases, and caching mechanisms, along with preferred skills in Python & GenAI, will be beneficial. To qualify for this role, you should hold a Bachelor's degree or equivalent, have 3-5 years of experience in developing enterprise-scale applications, possess a strong understanding of agile software development methodologies, and demonstrate excellent communication, customer service, analytic, and problem-solving skills. Your ability to work collaboratively with the team, acquire new skills, and contribute to continuous improvement efforts will be essential for success in this position.
Posted 2 weeks ago
2.0 - 10.0 years
0 Lacs
coimbatore, tamil nadu
On-site
The Technical Lead is responsible for leading a team of engineers in the design, implementation, maintenance, and troubleshooting of Linux-based systems. This role requires a deep understanding of Linux systems, network architecture, and software development processes. You will drive innovation, ensure system stability, and lead the team in delivering high-quality infrastructure solutions that align with the organization's goals. Lead and mentor a team of Linux engineers, providing technical guidance and fostering professional growth. Manage workload distribution, ensuring that projects and tasks are completed on time and within scope. Collaborate with cross-functional teams to align IT infrastructure with organizational objectives. You will also be responsible for SLA and ITIL compliance, as well as inventory management. Architect, deploy, and manage robust Linux-based environments, including servers, networking, and storage solutions. Ensure the scalability, reliability, and security of Linux systems. Oversee the automation of system deployment and management processes using tools such as Ansible, Puppet, or Chef. Additionally, you will handle database management for MySQL, MongoDB, Elasticsearch, and Postgres. Lead efforts in monitoring, maintaining, and optimizing system performance. Proactively identify potential issues and implement solutions to prevent system outages. Resolve complex technical problems escalated from the support team. Implement and maintain security best practices for Linux systems, including patch management, firewall configuration, and access controls. Ensure compliance with relevant industry standards and regulations (e.g., HIPAA, GDPR, PCI-DSS). Develop and maintain comprehensive documentation of systems, processes, and procedures. Prepare and present regular reports on system performance, incidents, and improvement initiatives to senior management. Stay up-to-date with the latest Linux technologies, tools, and practices.
Lead initiatives to improve the efficiency, reliability, and security of Linux environments. Drive innovation in infrastructure management, including the adoption of cloud technologies and containerization. Required Qualifications: - Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience. - 10+ years of experience in Linux system administration, with at least 2 years in a leadership or senior technical role. - Deep understanding of Linux operating systems (RHEL, CentOS, Ubuntu) and associated technologies. - Strong knowledge of networking principles, including TCP/IP, DNS, and firewalls. - Experience with automation and configuration management tools (e.g., Ansible, Puppet, Chef). - Proficiency in scripting languages (e.g., Bash, Python). - Experience with virtualization (e.g., VMware, KVM) and containerization (e.g., Docker, Kubernetes). - Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and hybrid cloud environments. - Excellent problem-solving skills and the ability to work under pressure. - Strong communication and interpersonal skills. Job Types: Full-time, Permanent Benefits: - Health insurance - Provident Fund Schedule: Day shift; yearly bonus Work Location: In person
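The patch-management and scripting duties in the posting above often reduce to small decision-making utilities. The following is a minimal, hypothetical sketch (hostnames, versions, and the baseline policy are all invented): deciding which hosts need patching by comparing installed kernel versions against a required baseline, the kind of logic that would feed an Ansible play or a report.

```python
# Illustrative sketch, not from the posting: compare installed kernel
# versions against a baseline to flag hosts that need patching.

def parse_kernel(version):
    """'5.15.0-91-generic' -> (5, 15, 0); drop the distro-specific suffix."""
    release = version.split("-", 1)[0]
    return tuple(int(part) for part in release.split("."))

def hosts_needing_patch(inventory, baseline):
    """Return hostnames whose kernel is older than the baseline version."""
    required = parse_kernel(baseline)
    return sorted(host for host, ver in inventory.items()
                  if parse_kernel(ver) < required)

# Invented inventory for demonstration.
fleet = {
    "web01": "5.15.0-91-generic",
    "db01": "5.4.0-150-generic",
    "cache01": "6.1.0-13-generic",
}
stale = hosts_needing_patch(fleet, "5.15.0")  # only db01 is below baseline
```

Comparing version tuples rather than raw strings avoids the classic `"5.4" > "5.15"` string-ordering bug; real fleets would source the inventory from facts gathered by the configuration-management tool.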
Posted 2 weeks ago
18.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Software engineering is the application of engineering to the design, development, implementation, testing and maintenance of software in a systematic method. The roles in this function will cover all primary development activity across all technology functions that ensure we deliver code with high quality for our applications, products and services and to understand customer needs and to develop product roadmaps. These roles include, but are not limited to analysis, design, coding, engineering, testing, debugging, standards, methods, tools analysis, documentation, research and development, maintenance, new development, operations and delivery. With every role in the company, each position has a requirement for building quality into every output. This also includes evaluating new tools, new techniques, strategies; Automation of common tasks; build of common utilities to drive organizational efficiency with a passion around technology and solutions and influence of thought and leadership on future capabilities and opportunities to apply technology in new and innovative ways. 
Primary Responsibilities Help define Platform roadmap for the enterprise, multi-region cloud strategy and own the services and capability delivery end to end Work very closely with various business stakeholders to drive the execution of multiple business initiatives and technologies Set short- and long-term vision, strategy, structure and direction for platform organization Partners with all Business and Product Leaders to develop new product features and upgrade existing Identity Platform product and processes; helps define product and project deliverables, budgets, schedules, and testing, launch and release plans Define Digital Identity Strategy for migration and reengineering existing products & business processes Build, develop and guide high-performing talent for this platform team. Define a long-term talent strategy cutting across domains and technology Manage web scale systems to demanding availability targets (99.999%+) Stay abreast of leading-edge technologies in the industry, evaluating emerging technologies and evangelizing their adoption Manages an Agile (Scrum) Development process in a continuous integration and deployment methodology Empower software delivery teams to rapidly deliver software through the use of automation and “everything-as-code” best practices Adapting to and remaining effective in a changing environment Hands-on approach to better understand the technical challenges faced by the Team; guide the team in technical solutions Oversee the planning & technical direction of various development tracks Drive architectural initiatives that align our business needs and technical capabilities for Identity Management solutions Represent Optum at various forums Provides leadership to and is accountable for the performance and results through multiple layers of management and senior level professional staff Impact of work is most often at the regional (e.g. 
multi-state) level, or is responsible for a major portion of a business segment, functional area or line of business Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications B.E./B.Tech (Computer Science) from a reputed college, Master's degree preferred, and total industry experience of 18+ years 5+ years of experience leading a team of Software Engineers 4+ years of experience working in a DevOps model 2+ years of experience in managing and using more than one Public Cloud Platform (GCP, AWS, Azure, Oracle Cloud) Experience with DevOps engineering tools such as Jenkins, Terraform, etc. Experience in a fast-paced, Agile, continuous integration environment Experience in Systems Monitoring, Alerting and Analytics Experience in Log aggregation reporting tools, such as: Elasticsearch, Splunk Experience working in an off-shore / on-shore model Hands-on experience in distributed application development in AWS public cloud environment following well-architected principles with the goal of 99.999% availability Cloud-Native Development experience - developing and deploying microservices to Kubernetes Solid programming experience with Java Experience leading development using modern software engineering and product development tools including Agile, Continuous Integration, Continuous Delivery, DevOps etc.
Experience developing highly resilient and scalable cloud-native and cloud-independent applications Experience developing multi-tenant SaaS-based applications Demonstrable experience leading international delivery and engineering teams Excellent knowledge of distributed computing Proven solid technical skills in data structures, algorithms, system design, coding best practices, build-release procedures A proven communicator at all levels of the organization Extremely hands-on technically, with a deep passion and curiosity for technology Exceptional communication skills and the demonstrable ability to communicate appropriately at all levels of the organization, including senior technology and business leaders Self-driven and a strategic leader who continuously raises the bar for self and the team Preferred Qualifications Certification in a cloud platform (AWS, GCP, Azure) Experience architecting, delivering, and operating large-scale, highly available systems Experience in the healthcare industry Experience in complex projects with division or company-wide scope Experience with information security threat modelling and risk analysis Knowledge of implementation of Technology specifications and/or RFCs Familiarity with IT standards and best practices, audit, security and compliance (ex: ITIL, ITSM, SOC2, HIPAA, HITRUST CSF) Proven success delivering products/services in a high-growth environment, exhibiting a solid ability to identify and solve ambiguous customer-focused problems Proven success in hiring and developing highly effective software engineering teams in a global team environment High attention to detail with a proven ability to juggle multiple, competing priorities simultaneously and make things happen in a fast-paced, dynamic environment At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone.
We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Are you passionate about inspiring change, building data-driven tools to improve software quality, and ensuring customers have the best experience? If so, we have a phenomenal opportunity for you! NVIDIA is seeking a creative and hands-on software engineer with a test-to-failure approach who is a quick learner, can understand software and hardware specifications, and can build reliable tests and tools in C++/C#/Python to improve quality and accelerate the delivery of NVIDIA products. As a Software Automation and Tools Engineer, you will take part in the technical design and implementation of tests for NVIDIA software products with the goal of identifying defects early in the software development lifecycle. You will also build tools that accelerate execution workflows for the organization. In this role, you can expect to develop automated end-to-end tests for NVIDIA device drivers and SDKs on the Windows platform, execute automated tests, identify and report defects, measure code coverage, analyze and drive code coverage improvements, develop applications and tools that bring data-driven insights to development and test workflows, build tools/utilities/frameworks in Python/C/C++ to automate and optimize testing workflows in the GPU domain, write maintainable, reliable, and well-detailed code, debug issues to identify the root cause, provide peer code reviews including feedback on performance, scalability, and correctness, optimally estimate and prioritize tasks to create a realistic delivery schedule, generate and test compatibility across a range of products and interfaces, and work closely with leadership to report progress by generating effective and impactful reports. You will have the opportunity to work on challenging technical and process issues. What we need to see:
- B.E./B.Tech degree in Computer Science/IT/Electronics engineering with strong academics, or equivalent experience - 5+ years of programming experience in Python/C/C++ with experience applying Object-Oriented Programming concepts - Hands-on knowledge of developing Python scripts with application development concepts like dictionaries, tuples, RegEx, PIP, etc. - Good experience using AI development tools for test plan creation, test case development, and test case automation - Experience with testing RESTful APIs and the ability to conduct performance and load testing to ensure the application can handle high traffic and usage - Experience working with databases and storage technologies like SQL and Elasticsearch - Good understanding of OS fundamentals, PC hardware, and troubleshooting - The ability to collaborate with multiple development teams to gain knowledge and improve test code coverage - Excellent written and verbal communication skills and excellent analytical and problem-solving skills - The ability to work with a team of engineers in a fast-paced environment Ways to stand out from the crowd: - Prior project experience with building ML- and DL-based applications would be a plus - Good understanding of testing fundamentals - Good problem-solving skills (solid logic to apply in isolation and regression of issues found)
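The "Python scripts with dictionaries, tuples, RegEx" skill the posting lists can be made concrete with a small sketch. The log format, function names, and test names below are hypothetical, invented purely for illustration: a utility that scrapes automated-test output with a regular expression and summarizes results in a dictionary.

```python
# Hypothetical sketch: summarize a test-run log using RegEx and dicts.
# The "[PASS]/[FAIL] test_name" log format is an invented assumption.
import re

RESULT_RE = re.compile(r"^\[(?P<status>PASS|FAIL)\]\s+(?P<test>\S+)")

def summarize(log_lines):
    """Count PASS/FAIL lines and collect the names of failing tests."""
    summary = {"PASS": 0, "FAIL": 0, "failures": []}
    for line in log_lines:
        m = RESULT_RE.match(line)
        if not m:
            continue  # skip unstructured driver output
        summary[m.group("status")] += 1
        if m.group("status") == "FAIL":
            summary["failures"].append(m.group("test"))
    return summary

log = [
    "[PASS] test_driver_load",
    "[FAIL] test_4k_encode",
    "driver: warning xyz",  # ignored: does not match the result pattern
]
report = summarize(log)
```

In a real automation framework this kind of parser would feed the "effective and impactful reports" the role calls for, with named regex groups keeping the extraction readable.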
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About TripleLift We're TripleLift, an advertising platform on a mission to elevate digital advertising through beautiful creative, quality publishers, actionable data and smart targeting. Through over 1 trillion monthly ad transactions, we help publishers and platforms monetize their businesses. Our technology is where the world's leading brands find audiences across online video, connected television, display and native ads. Brand and enterprise customers choose us because of our innovative solutions, premium formats, and supportive experts dedicated to maximizing their performance. As part of the Vista Equity Partners portfolio, we are NMSDC certified, qualify for diverse spending goals and are committed to economic inclusion. Find out how TripleLift raises up the programmatic ecosystem at triplelift.com. The Role TripleLift is seeking a Senior Data Engineer to join a small, influential Data Engineering team. You will be responsible for expanding and optimizing our high-volume, low-latency data platform architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline engineer and data wrangler who enjoys optimizing data systems and building them from the ground up. This role will support our software engineers, product managers, business intelligence analysts and data scientists on data initiatives, and will ensure optimal data delivery architecture is applied consistently throughout new and ongoing projects. Ideal candidates must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. Responsibilities Create and maintain optimal, high-throughput data platform architecture handling hundreds of billions of daily events.
Explore, refine and assemble large, complex data sets that meet functional product and business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, EMR, Snowpark, Kafka and other big data technologies. Work with stakeholders across geo-distributed teams, including product managers, engineers and analysts to assist with data-related technical issues and support their data infrastructure needs. Digest and communicate business requirements effectively to both technical and non-technical audiences. Translate business requirements into concise technical specifications. Qualifications 6+ years of experience in a Data Engineer role Bachelor's degree or higher in Computer Science or a related engineering field Experience building and optimizing ‘big data’ data pipelines, architectures and data sets Expert working knowledge of Databricks/Spark and associated APIs Strong experience with object-oriented and functional scripting languages: Python, Java, Scala and associated toolchain Experience working with relational databases, SQL authoring/optimizing as well as operational familiarity with a variety of databases. Experience with AWS cloud services: EC2, EMR, RDS Experience working with NoSQL data stores such as: Elasticsearch, Apache Druid Experience with data pipeline and workflow management tools: Airflow Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong experience working with unstructured and semi-structured data formats: JSON, Parquet, Iceberg, Avro, Protobuf Expert knowledge of processes supporting data transformation, data structures, metadata, dependency and workload management. Proven experience in manipulating, processing, and extracting value from large, disparate datasets. Working knowledge of streams processing, message queuing, and highly scalable ‘big data’ data stores. Experience supporting and working with cross-functional teams in a dynamic environment. Preferred Streaming systems experience with Kafka, Spark Streaming, Kafka Streams Snowflake/Snowpark DBT Exposure to AdTech Life at TripleLift At TripleLift, we’re a team of great people who like who they work with and want to make everyone around them better. This means being positive, collaborative, and compassionate. We hustle harder than the competition and are continuously innovating. Learn more about TripleLift and our culture by visiting our LinkedIn Life page. Establishing People, Culture and Community Initiatives At TripleLift, we are committed to building a culture where people feel connected, supported, and empowered to do their best work. We invest in our people and foster a workplace that encourages curiosity, celebrates shared values, and promotes meaningful connections across teams and communities. We want to ensure the best talent of every background, viewpoint, and experience has an opportunity to be hired, belong, and develop at TripleLift. Through our People, Culture, and Community initiatives, we aim to create an environment where everyone can thrive and feel a true sense of belonging. Privacy Policy Please see our Privacy Policies on our TripleLift and 1plusX websites. TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.
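The extract/transform work over semi-structured formats described in the posting above can be sketched in miniature. This is a hedged illustration only: the event schema (`ts`, `type` fields) is invented, and a pipeline at the stated scale would run this logic in Spark/Databricks rather than plain Python.

```python
# Illustrative sketch, schema invented: parse newline-delimited JSON events,
# drop malformed records, and roll up counts per (date, event_type).
import json
from collections import defaultdict

def parse_events(raw_lines):
    """Parse NDJSON lines, silently dropping malformed records."""
    events = []
    for line in raw_lines:
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # real pipelines would route these to a dead-letter sink
    return events

def daily_counts(events):
    """Aggregate events per (date, event_type): a typical rollup step."""
    counts = defaultdict(int)
    for event in events:
        counts[(event["ts"][:10], event["type"])] += 1
    return dict(counts)

raw = [
    '{"ts": "2024-06-01T10:00:00Z", "type": "impression"}',
    '{"ts": "2024-06-01T10:05:00Z", "type": "impression"}',
    'not json',
    '{"ts": "2024-06-02T09:00:00Z", "type": "click"}',
]
rollup = daily_counts(parse_events(raw))
```

The same parse-validate-aggregate shape maps directly onto Spark transformations; tolerating malformed records explicitly (rather than failing the batch) is the design choice that keeps high-volume pipelines resilient.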
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About TripleLift We're TripleLift, an advertising platform on a mission to elevate digital advertising through beautiful creative, quality publishers, actionable data and smart targeting. Through over 1 trillion monthly ad transactions, we help publishers and platforms monetize their businesses. Our technology is where the world's leading brands find audiences across online video, connected television, display and native ads. Brand and enterprise customers choose us because of our innovative solutions, premium formats, and supportive experts dedicated to maximizing their performance. As part of the Vista Equity Partners portfolio, we are NMSDC certified, qualify for diverse spending goals and are committed to economic inclusion. Find out how TripleLift raises up the programmatic ecosystem at triplelift.com. The Role TripleLift is seeking a Data Engineer II to join a small, influential Data Engineering team. You will be responsible for evolving and optimizing our high-volume, low-latency data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. In this role, you will support our software engineers, product managers, business intelligence analysts, and data scientists on data initiatives. You will also ensure optimal data delivery architecture is applied consistently across all new and ongoing projects. Ideal candidates will be self-starters who can efficiently meet the data needs of various teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. Responsibilities Create and maintain optimal, high-throughput data platform architecture handling hundreds of billions of daily events.
Explore, refine and assemble large, complex data sets that meet functional product and business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, EMR, Snowpark, Kafka and other big data technologies Work with stakeholders across geo-distributed teams, including product managers, engineers and analysts to assist with data-related technical issues and support their data infrastructure needs. Digest and communicate business requirements effectively to both technical and non-technical audiences. Qualifications 2+ years of experience in a Data Engineer role Bachelor's degree or higher in Computer Science or a related engineering field Experience building and optimizing ‘big data’ data pipelines, architectures and data sets Strong working knowledge of Databricks/Spark and associated APIs Experience with object-oriented and functional scripting languages: Python, Java, Scala and associated toolchain Experience working with relational databases, SQL authoring/optimizing as well as operational familiarity with a variety of databases. Experience with AWS cloud services: EC2, EMR, RDS Experience working with NoSQL data stores such as: Elasticsearch, Apache Druid Experience with data pipeline and workflow management tools: Airflow Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement Strong experience working with unstructured and semi-structured data formats: JSON, Parquet, Iceberg, Avro, Protobuf Expert knowledge of processes supporting data transformation, data structures, metadata, dependency and workload management.
Proven experience in manipulating, processing, and extracting value from large, disparate datasets. Working knowledge of streams processing, message queuing, and highly scalable ‘big data’ data stores. Experience supporting and working with cross-functional teams in a dynamic environment. Preferred Streaming systems experience with Kafka, Spark Streaming, Kafka Streams Snowflake/Snowpark DBT Exposure to AdTech Life at TripleLift At TripleLift, we’re a team of great people who like who they work with and want to make everyone around them better. This means being positive, collaborative, and compassionate. We hustle harder than the competition and are continuously innovating. Learn more about TripleLift and our culture by visiting our LinkedIn Life page. Establishing People, Culture and Community Initiatives At TripleLift, we are committed to building a culture where people feel connected, supported, and empowered to do their best work. We invest in our people and foster a workplace that encourages curiosity, celebrates shared values, and promotes meaningful connections across teams and communities. We want to ensure the best talent of every background, viewpoint, and experience has an opportunity to be hired, belong, and develop at TripleLift. Through our People, Culture, and Community initiatives, we aim to create an environment where everyone can thrive and feel a true sense of belonging. Privacy Policy Please see our Privacy Policies on our TripleLift and 1plusX websites. TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.
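To give a concrete flavor of the transformation work described above, here is a minimal, hypothetical sketch (pure Python, standard library only) of one pipeline step: flattening a semi-structured JSON ad event into a uniform record before loading. All field names and the sample payload are invented for illustration; at the scale this role describes, such logic would typically run inside Spark rather than a single process.

```python
import json


def flatten_event(raw: str) -> dict:
    """Flatten one semi-structured JSON ad event into a uniform record.

    Nested fields are promoted to top-level keys, and missing optional
    fields get explicit defaults so the downstream schema stays stable.
    """
    event = json.loads(raw)
    user = event.get("user", {})
    return {
        "event_id": event["event_id"],
        "event_type": event.get("type", "unknown"),
        "user_id": user.get("id"),
        "country": user.get("geo", {}).get("country", "??"),
        "bid_price_usd": float(event.get("bid_price_usd", 0.0)),
    }


raw = '{"event_id": "e1", "type": "impression", "user": {"id": "u9", "geo": {"country": "US"}}}'
record = flatten_event(raw)
print(record["country"])  # prints US
```

The design choice worth noting is the explicit defaulting: formats like JSON and Avro tolerate missing fields, so normalizing them at ingestion time keeps every downstream consumer working against one predictable schema.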
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
As a developer, you will bring ideas to life by relentlessly improving the performance, scalability, and maintainability of the development process. You will drive and adhere to software development best practices while continuously seeking to improve and maintain software. Collaborating closely with internal operations teams, you will empower them with technology solutions. Your responsibilities will include automating tasks wherever possible and configuring everything as code. Additionally, you will estimate and manage feature deliveries in a predictable manner, driving discussions to enhance products, processes, and technologies. Incremental changes to the architecture will be made after conducting impact analyses.

With 1.5-4 years of experience in product development and architecture, you possess strong fundamentals in Computer Science, especially in Networking, Databases, and Operating Systems. Organized and self-sufficient with attention to detail, you excel in both front-end and back-end development, preferably as a full-stack developer. Your understanding of MVC frameworks like Rails, Angular, Django, and React, along with familiarity with micro-service architecture and test-driven development, will be essential in this role. Proficiency in *nix systems and AWS/Kubernetes, along with SQL skills (particularly PostgreSQL), will be advantageous. Previous experience mentoring developers is a plus. Additional experience with NewRelic, Kafka, ElasticSearch, RPC, SOA, event-driven systems, message buses, and designing services/applications from scratch will be highly valued in this position.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Software Engineer at Bain & Company, you will be responsible for delivering application modules with minimal supervision, guiding entry-level engineers, and collaborating with an Agile software development team. You will work on building and supporting Bain's strategic internal software systems, focusing on delivering value to global users and supporting key business initiatives. Your role will involve developing enterprise-scale browser-based or mobile applications using current Microsoft development languages and technologies. Your primary responsibilities and duties will include:

Technical Delivery (80%):
- Collaborating with teams on enterprise applications
- Participating in Agile team events and activities
- Identifying technical steps required for story completion
- Working with senior team members to evaluate product backlog items
- Demonstrating business and domain knowledge to achieve outcomes
- Analyzing user stories, performing task breakdown, and completing committed tasks
- Understanding and using infrastructure to develop features
- Following application design and architecture standards
- Writing unit test plans and executing tests
- Testing and implementing application releases
- Providing ongoing support for applications in use
- Acquiring new skills through training to be a T-shaped team member
- Contributing to sprint retrospectives for team improvement
- Following Bain development project process and standards
- Writing technical documentation as required

Research (10%):
- Evaluating and employing new technologies for software applications
- Researching and evaluating tools and technologies for future initiatives
- Sharing concepts and technologies with the software development team

Communication (10%):
- Presenting technical findings and recommendations to the team
- Communicating impediments clearly and ensuring understanding of completion criteria
- Providing input during sprint retrospectives for team improvement

You should have knowledge and experience in:
- Frameworks: .NET & .NET Core
- Languages: C#, T-SQL
- Web frameworks/libraries: Angular/React, JavaScript, HTML, CSS, Bootstrap
- RDBMS: Microsoft SQL Server
- Cloud services: Microsoft Azure
- Unit testing tools: XUnit, Jasmine
- DevOps tools: GitHub Actions

Preferred skills include Python & GenAI.

Qualifications:
- Bachelor's or equivalent degree
- 3-5 years of experience in software development
- Experience in developing enterprise-scale applications
- Strong knowledge of agile software development methodologies
- Excellent communication, customer service, analytic, and problem-solving skills
- Demonstrated T-shaped behavior to expedite delivery and manage conflicts/contingencies
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As a Software Engineer II - Backend - Data Engineering at Blueshift, a venture-funded startup headquartered in San Francisco and expanding its team in Pune, India, you will play a crucial role in contributing to the development and maintenance of high-throughput, low-latency data pipelines. These pipelines are responsible for ingesting millions of user, product, and event data into our system in real-time and batch modes. In this position, you will collaborate closely with experienced engineers to create scalable, fault-tolerant infrastructure that supports our core personalization and decision-making systems. Your responsibilities will include integrating with external systems and data warehouses to ensure reliable data ingestion and export. You will have the opportunity to take ownership of key components and features, from design to deployment, and enhance your skills in system design, performance optimization, and production monitoring. Working with modern technologies like Rust, Elixir, and Ruby on Rails, you will gain hands-on experience in building backend systems that operate at scale. Key Responsibilities: - Design, implement, and maintain data ingestion pipelines for real-time and batch processing workloads. - Build and improve services for moving and transforming large volumes of data efficiently. - Develop new microservices and components to enhance data integration and platform scalability. - Integrate with third-party data warehouses and external systems for importing or exporting data. - Contribute to enhancing the performance, reliability, and observability of existing data systems. - Assist in diagnosing and resolving data-related issues reported by internal teams or customers. - Engage in code reviews, knowledge sharing, and continuous improvement of engineering practices within the team. Requirements: - Bachelors/Masters in Computer Science or related fields. - 2+ years of experience as a backend engineer. 
- Solid understanding of CS concepts, OOP, data structures, and concurrency.
- Proficiency in relational databases and SQL.
- Attention to detail, curiosity, proactiveness, willingness to learn, and a sense of ownership.
- Good communication and coordination skills.
- Experience with public clouds like AWS/Azure/GCP and modern languages like Elixir/Rust/Golang is advantageous.
- Experience with NoSQL systems such as Cassandra/ScyllaDB, Elasticsearch, and Redis is a plus.

Perks and Benefits:
- Competitive salary and stock option grants.
- Comprehensive hospitalization, personal accident, and term insurance coverage.
- Conveniently located in Baner, one of the best neighborhoods for tech startups.
- Daily catered breakfast, lunch, snacks, and a well-stocked pantry.
- Supportive team environment that values your well-being and growth opportunities.
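As a rough illustration of the batch side of such an ingestion pipeline, here is a toy micro-batching buffer in pure Python. The class name, batch size, and sink are invented for illustration; a production system would add time-based flushes, retries, and backpressure on top of this core idea.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class MicroBatcher:
    """Buffer incoming events and flush them to a sink in fixed-size batches.

    A stand-in for the batching stage of an ingestion pipeline: events
    accumulate until `batch_size` is reached, then the sink receives the
    whole batch at once, amortizing per-write overhead.
    """
    batch_size: int
    sink: Callable[[list], None]
    _buffer: list = field(default_factory=list)

    def ingest(self, event: dict) -> None:
        self._buffer.append(event)
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        # Deliver whatever is buffered, including a final partial batch.
        if self._buffer:
            self.sink(self._buffer)
            self._buffer = []


batches = []
b = MicroBatcher(batch_size=2, sink=batches.append)
for i in range(5):
    b.ingest({"id": i})
b.flush()  # drain the trailing partial batch
print(len(batches))  # prints 3
```

Batching like this is the usual compromise between per-event (low latency, high overhead) and bulk (high throughput, high latency) writes; real pipelines typically flush on whichever comes first, batch size or a timeout.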
Posted 2 weeks ago
15.0 - 19.0 years
0 Lacs
chennai, tamil nadu
On-site
SquareShift is a Chennai-based, high-growth software services firm with a global presence in the US and Singapore. Established in 2019, we specialize in providing AI-led cloud and data solutions, including Google Cloud Platform consulting, Elasticsearch-based observability, data engineering, machine learning, and secure, scalable product engineering across various industries such as banking, retail, and hi-tech. As an official Google Cloud Partner and Elastic Partner, we assist enterprises in modernizing, innovating, and scaling through multi-cloud architectures, AI-powered platforms, and cutting-edge R&D initiatives. Our Innovation & Research Lab rapidly develops proofs-of-concept (POCs) in AI, analytics, and emerging technologies to drive business value with speed and precision. We are on a mission to become the most trusted partner in enterprise cloud adoption by solving complex challenges through innovation, execution excellence, and a culture that emphasizes learning, action, quality, and a strong sense of belonging.

The Vice President / Head of Engineering at SquareShift will lead our global engineering organization from Chennai, defining the technology vision, driving innovation, and scaling high-performance teams. This role combines strategic leadership, deep technical expertise, and hands-on engagement with customers and partners to deliver cutting-edge, cloud-enabled, AI-driven solutions.

Key Responsibilities:

**Technology & Strategy**
- Define and execute the engineering strategy aligned with corporate goals and evolving market trends.
- Lead multi-stack engineering delivery across web, mobile, backend, AI/ML, and cloud platforms.
- Oversee cloud-native architectures with a strong focus on Google Cloud Platform and ensure integration with AWS/Azure as necessary.

**Leadership & Execution**
- Build, mentor, and manage a world-class engineering team in Chennai and other global locations.
- Drive agile delivery excellence, ensuring projects are delivered on time, within budget, and to world-class standards.
- Promote DevOps, CI/CD, microservices, and secure-by-design development practices.

**Innovation & R&D**
- Lead the Proof of Concept (POC) & Research Department to explore and validate AI, Elasticsearch, data engineering, and emerging technologies.
- Collaborate with Google Cloud and Elastic's engineering teams on joint innovations.
- Rapidly prototype, test, and deploy solutions for enterprise digital transformation.

**Client & Business Engagement**
- Partner with sales, pre-sales, and delivery leadership to shape technical proposals and architectures.
- Engage with C-level executives of client organizations to define and deliver technology roadmaps.
- Own engineering resource planning, budget control, and capacity scaling.

Required Skills & Qualifications:
- 15+ years in software engineering, including 7+ years in senior leadership roles.
- Proven success in leading multi-stack, multi-cloud teams using various technologies.
- Hands-on expertise in Google Cloud Platform, Elasticsearch, and AI/ML systems.
- Experience in POC and rapid prototyping environments.
- Strong background in security, compliance, and enterprise architecture.
- Excellent leadership, stakeholder management, and communication skills.

Preferred Attributes:
- Experience in partner ecosystem collaboration.
- History of building and scaling R&D teams in India.
- Exposure to digital transformation projects for enterprise clients.
- Innovative, entrepreneurial, and technology-forward mindset.

Why Join Us
- Lead the engineering vision of a global GCP & Elastic partner from Chennai.
- Directly influence AI, cloud, and search innovation for Fortune 500 clients.
- Build and shape world-class engineering teams and research labs.
- Work in a culture that celebrates innovation, speed, and excellence.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
At Crimson Enago, we are dedicated to developing AI-powered tools and services that enhance the productivity of researchers and professionals. We understand that the stages of knowledge discovery, acquisition, creation, and dissemination can be cognitively demanding and interconnected. This is why our flagship products, Trinka and RAx, have been designed to streamline and accelerate these processes. Trinka, available at www.trinka.ai, is an AI-powered English grammar checker and language enhancement writing assistant specifically tailored for academic and technical writing. Developed by linguists, scientists, and language enthusiasts, Trinka is capable of identifying and correcting numerous intricate writing errors, ensuring your content is error-free. It goes beyond basic grammar correction by addressing contextual spelling mistakes, advanced grammar errors, enhancing vocabulary usage, and providing real-time writing suggestions. With subject-specific correction features, Trinka ensures that your writing is professional, concise, and engaging. Moreover, Trinka's Enterprise solutions offer unlimited access and customizable options to leverage its full capabilities. RAx, the first smart workspace available at https://raxter.io, is designed to assist researchers (including students, professors, and corporate researchers) in optimizing their research projects. Powered by proprietary AI algorithms and innovative problem-solving approaches, RAx aims to become the go-to workspace for research-intensive projects. By bridging information sources such as research papers, blogs, wikis, books, courses, and videos with user behaviors like reading, writing, annotating, and discussing, RAx uncovers new insights and opportunities in the academic realm. Our team consists of passionate researchers, engineers, and designers who share a common goal of revolutionizing research-intensive project workflows. 
We are committed to reducing cognitive load and facilitating the conversion of information into knowledge. The engineering team is dedicated to creating a scalable platform that manages vast amounts of data, implements AI processing, and caters to users worldwide. We firmly believe that research plays a crucial role in shaping a better world and strive to make the research process accessible and enjoyable.

As an SDE-3 Fullstack at Trinka (https://trinka.ai), you will lead a team of web developers, drive end-to-end project development, and collaborate with key stakeholders such as the Engineering Manager, Principal Engineer, and Technical Project Manager. Your responsibilities will include hands-on coding, team leadership, hiring, training, and ensuring project delivery.

We are looking for an SDE-3 Fullstack with at least 5 years of enterprise frontend-full-stack web experience, focusing on the AngularJS-Java-AWS stack. Ideal candidates will possess excellent research skills, advocate for comprehensive testing practices, demonstrate strong software design patterns, and exhibit expertise in optimizing scalable solutions. Additionally, experience with AWS technologies, database management, frontend development, and collaboration within a team-oriented environment is highly valued.

If you meet the above requirements and are enthusiastic about contributing to a dynamic and innovative team, we invite you to join us in our mission to simplify and revolutionize research-intensive projects. Visit our websites: https://www.trinka.ai/, https://raxter.io/, https://www.crimsoni.com/
Posted 2 weeks ago
7.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Senior Data Engineer with 7-12 years of experience, you will be based in Bangalore at Ecoworld with a hybrid work-from-office setup. Immediate joiners are preferred for this role. Your primary responsibility will be processing large-scale process data for real-time analytics and ML model consumption. You should possess strong analytical skills to assess, engineer, and optimize data pipelines effectively.

In terms of technical skills, you should be proficient in advanced Python and PySpark. Experience working with cloud platforms such as AWS and Azure is required. Familiarity with frameworks like Lambda, Django, and Express will be beneficial, and knowledge of databases including PostgreSQL, MongoDB, and Elasticsearch is essential for this role. Your day-to-day tasks will include design, development, automated unit testing using pytest, and packaging Python applications.

Candidates with experience in the industrial automation domain will be preferred, as will proficiency in Agile methodology and the DevOps toolset (Git/Bitbucket, GitHub Copilot). We are looking for individuals with strong problem-solving abilities, excellent communication skills, and the capability to work both independently and collaboratively within a team.
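For flavor, here is a minimal sketch of the unit-testing style this role calls for: a small pipeline helper plus pytest-style tests. The function, its name, and the bounds are invented for illustration; plain `assert` statements keep the sketch runnable even without pytest installed, while the `test_*` naming matches what pytest would collect.

```python
def clip_outliers(values, lower, upper):
    """Clamp raw sensor readings into [lower, upper] before model consumption."""
    if lower > upper:
        raise ValueError("lower bound must not exceed upper bound")
    return [min(max(v, lower), upper) for v in values]


# pytest-style tests: pytest auto-discovers functions named test_*
def test_clip_outliers_clamps_both_ends():
    assert clip_outliers([-5, 0, 50, 120], 0, 100) == [0, 0, 50, 100]


def test_clip_outliers_rejects_bad_bounds():
    try:
        clip_outliers([1], 10, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")


test_clip_outliers_clamps_both_ends()
test_clip_outliers_rejects_bad_bounds()
```

Under pytest, the two `test_*` functions would run automatically via `pytest <file>.py` with no explicit calls needed; the direct calls at the bottom just make the sketch self-checking as a plain script.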
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You will have the opportunity to work in a dynamic environment where innovation and teamwork are key to supporting exciting missions around the world. The Qualys SaaS platform is database-centric, relying on technologies such as Oracle, Elasticsearch, Cassandra, Kafka, Redis, and Ceph to deliver uninterrupted service to customers globally. We are seeking an individual who is proficient in two or more of these technologies and is eager to learn new technologies while automating day-to-day tasks. As a Database Reliability Engineer (DBRE), your primary responsibility will be to ensure the smooth operation of production systems across various worldwide platforms. Collaboration with Development/Engineering, SRE, Infra, Networking, Support teams, and customers worldwide will be essential to provide 24x7 support for Qualys production applications and enhance service through database optimizations. This role will report to the Manager DBRE. Key Responsibilities: - Installation, configuration, and performance tuning of Oracle, Elasticsearch, Cassandra, Kafka, Redis, and Ceph in a multi-DC environment. - Identification of bottlenecks and performance tuning to maintain database integrity and security. - Monitoring performance and managing configuration parameters for fast query responses. - Installation, testing, and patching of new and existing databases. - Ensuring the proper functioning of storage, archiving, backup, and recovery procedures. - Understanding business applications to recommend and implement necessary database or architecture changes. Qualifications: - Minimum 5 years of experience managing SQL & No-SQL databases. - Extensive knowledge of diagnostic, monitoring, and troubleshooting tools to improve database performance. - Understanding of database, business applications, and operating system interrelationships. - Familiarity with backup and recovery scenarios and real-time data synchronization. 
- Strong problem-solving and analytical skills with the ability to collaborate across teams.
- Experience working in a mid to large-size production environment.
- Working knowledge of Ansible, Jenkins, Git, and Grafana is a plus.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a Full-stack Software Engineer on Autodesk's Fusion Operations team, you will lead the design, implementation, testing, and maintenance of applications that provide a device-independent experience. You will collaborate with cross-functional teams to keep project goals and timelines aligned. Applying your expertise in object-oriented programming principles, Java, and frontend development technologies, you will deliver robust, performant, and maintainable commercial applications. Your role will involve leveraging cloud technologies such as AWS services for scalable solutions and participating in an on-call rotation to support production systems. You will also develop and maintain automated tests to increase overall code coverage. Familiarity with Agile methodologies, the Scrum framework, and Java frameworks such as Play or Spring will be advantageous in this position.

Minimum qualifications include a Bachelor's degree in computer science or a related field, along with 3+ years of industry experience. Proficiency in Java, JavaScript/HTML/CSS, and MySQL databases is required. Strong problem-solving skills, adaptability to changing priorities, and excellent communication skills in English are essential for success in this role.

Preferred qualifications include experience with frontend frameworks like Vue.js or React, Elasticsearch, test automation tools, and CI/CD tools. A basic understanding of event-driven architecture principles and familiarity with CAD concepts related to Inventor, AutoCAD, and Factory Design Utilities would be beneficial.

Join Autodesk, where amazing things are created every day using innovative software. Become part of a culture that guides the way we work, connect with customers, and show up in the world. Shape a better world designed and made for all by joining us in our mission to turn ideas into reality.
Posted 2 weeks ago