1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky’s-the-limit thinking in a cloud-enabled world. Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. Within Azure Data, the Microsoft Fabric platform team builds and maintains the operating system and provides customers a unified data stack to run an entire data estate. The platform provides a unified experience and unified governance, and enables a unified business model and a unified architecture. Within Fabric, the Data team builds telemetry pipelines for both internal and external customers. This is a mission-critical service built for scale, compliance, and data guarantees, leveraging our world-class technologies like Power BI, Azure Data Explorer, Event Hubs, and Azure Data Factory to ensure robust telemetry flow and ease of consumption for diagnostics and analytics. We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served. Responsibilities: Contribute to the development and maintenance of core services, APIs, and SDKs used to produce, pipeline, and consume telemetry. Collaborate with teammates and stakeholders to implement service data models using Azure and internal technologies.
Support service improvements by completing scoped tasks and addressing bugs or technical debt. Participate in the design and development of features under the guidance of senior engineers. Help implement and iterate on solutions in collaboration with Product Management and partner teams. Join the on-call rotation with support and mentoring to maintain service health. Embody our culture and values. Qualifications Required/Minimum Qualifications: Bachelor's Degree in Computer Science or related technical field AND 1+ year(s) of technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, JavaScript, or Python, OR equivalent experience. Other Requirements: Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Preferred/Additional Qualifications: Excellent verbal and written communication skills, including proven experience communicating clearly and precisely with both technical and non-technical stakeholders at all levels. Equal Opportunity Employer (EOE) #azdat #azuredata #telemetry #data #infrastructure #powerbi #analytics #database #Kusto #messaging #synapse #spark #kafka #streamprocessing #eventhubs Microsoft is an equal opportunity employer.
Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
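The posting above describes services that produce, pipeline, and consume telemetry. As an illustrative sketch only (Python with an in-memory queue, not the Azure stack the role actually uses; the event fields are invented for the example), a minimal produce/consume loop might look like:

```python
import json
import queue
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TelemetryEvent:
    service: str
    metric: str
    value: float
    timestamp: str

def produce(q, service, metric, value):
    # Serialize the event as JSON, as a telemetry producer would before shipping it.
    event = TelemetryEvent(service, metric, value,
                           datetime.now(timezone.utc).isoformat())
    q.put(json.dumps(asdict(event)))

def consume(q):
    # Drain the queue and deserialize each event for downstream diagnostics.
    events = []
    while not q.empty():
        events.append(json.loads(q.get()))
    return events

q = queue.Queue()
produce(q, "fabric", "latency_ms", 42.0)
produce(q, "fabric", "error_rate", 0.01)
events = consume(q)
print(len(events), events[0]["metric"])  # 2 latency_ms
```

In a production pipeline the queue would be a durable broker (the posting names Event Hubs) and consumers would feed analytics stores, but the produce/serialize/consume shape is the same.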
Posted 4 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Us MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better. Job Title: Senior Java Spring Boot Developer Experience: 6+ Years Location: Any PAN India Location - Hybrid Working Model Job Description: We are seeking an experienced Senior Java Spring Boot Developer with 6+ years of hands-on experience in building scalable, high-performance microservices using Java, Spring Boot, and Spring JPA. The ideal candidate will have strong expertise in designing and developing RESTful APIs, microservices architecture, and cloud-native applications. As part of our team, you will work on enterprise-grade applications, collaborate with cross-functional teams, and contribute to the full software development lifecycle. Mandatory Skills: ✔ 6+ years of Java development (Java 8/11/17). ✔ Strong Spring Boot & Spring JPA experience. ✔ Microservices architecture (design, development, deployment). ✔ RESTful API development & integration. ✔ Database expertise (SQL/NoSQL – PostgreSQL, MySQL, MongoDB). ✔ Testing frameworks (JUnit, Mockito). ✔ Agile methodologies & CI/CD pipelines. Key Responsibilities: Design & Development: Develop high-performance, scalable microservices using Spring Boot. Design and implement RESTful APIs following best practices. Use Spring JPA/Hibernate for database interactions (SQL/NoSQL). Implement caching mechanisms (Redis, Ehcache) for performance optimization. Microservices Architecture: Build and maintain cloud-native microservices (Docker, Kubernetes). Integrate with message brokers (Kafka, RabbitMQ) for event-driven systems. Ensure fault tolerance, resilience, and scalability (Circuit Breaker, Retry Mechanisms). Database & Performance: Optimize database queries (PostgreSQL, MySQL, MongoDB). Implement connection pooling, indexing, and caching strategies. 
Monitor and improve application performance (JVM tuning, profiling). Testing & Quality Assurance: Write unit & integration tests (JUnit, Mockito, Test Containers). Follow TDD/BDD practices for robust code quality. Perform code reviews and ensure adherence to best practices. DevOps & CI/CD: Work with Docker, Kubernetes, and cloud platforms (AWS/Azure). Set up and maintain CI/CD pipelines (Jenkins, GitHub Actions). Automate deployments and monitoring (Prometheus, Grafana). Collaboration & Agile: Work in Agile/Scrum teams with daily standups, sprint planning, and retrospectives. Collaborate with frontend, QA, and DevOps teams for seamless delivery.
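The posting above mentions fault-tolerance patterns such as circuit breakers and retry mechanisms. As a hedged illustration (a plain Python sketch rather than the Spring ecosystem tooling the role would actually use), a basic retry with exponential backoff looks like:

```python
import time

def retry(func, attempts=3, base_delay=0.01):
    # Retry a flaky call with exponential backoff; re-raise after the last attempt.
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky():
    # Simulated downstream call that fails twice before succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))  # ok (succeeds on the third attempt)
```

A circuit breaker extends this idea by tracking recent failures and rejecting calls outright while the downstream service is unhealthy, instead of retrying forever.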
Posted 4 days ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role: Backend Engineer (Node.js) Experience: 2 to 5 Years Duration: 3 to 6 Months (Contractual) What They'll Work On: • Build and maintain backend systems using Node.js • Work on APIs, infrastructure, data pipelines, and backend architecture • Ensure system performance, scalability, and reliability • Own development end to end: coding, deployment, debugging, and improvement What We're Looking For: • Experience in backend development with Node.js (an added plus if they have worked on the Shopify ecosystem) • Strong in JavaScript/TypeScript and REST API development • Experience with MongoDB, PostgreSQL (or similar DBs) • Experience with event-driven systems like Kafka/SQS • Familiar with Git, CI/CD, and cloud platforms (preferably AWS) • Exposure to Docker, Redis, or container tools is a plus • Should be hands-on, accountable, and ready to join immediately Evaluation Process: • Minimum 2 technical rounds (includes live coding + system discussion)
Posted 4 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Principal Data Engineer _ Hyderabad (Onsite) Job Title: Principal Data Engineer Work Location: Hyderabad (Onsite) Experience: 10+ Years Job Description: 10+ years of experience in data engineering, with at least 3 years in a technical leadership role. Strong expertise in SQL, Python or Scala, and modern ETL/ELT frameworks. Deep knowledge of data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) and distributed systems (e.g., Hadoop, Spark). Proven experience with cloud platforms (AWS, Azure, or GCP) and associated data services (e.g., S3, Glue, Dataflow, Databricks). Hands-on experience with streaming platforms such as Kafka, Flink, or Kinesis. Solid understanding of data modeling, data lakes, data governance, and security. Excellent communication, leadership, and stakeholder management skills. Preferred Qualifications: Exposure to tools like Airflow, dbt, Terraform, or Kubernetes. Familiarity with data cataloging and lineage tools (e.g., Alation, Collibra). Domain experience in [e.g., Banking, Healthcare, Finance, E-commerce] is a plus. Experience in designing data platforms for AI/ML workloads.
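The role above centers on ETL/ELT work in SQL and Python. A minimal, self-contained sketch of the extract-transform-load pattern (the table and rows here are invented for illustration, using an in-memory SQLite database rather than a real warehouse) might be:

```python
import sqlite3

# Extract: raw order rows as they might arrive from a source system.
raw_rows = [
    {"order_id": 1, "amount": "150.00", "country": "in"},
    {"order_id": 2, "amount": "75.50", "country": "IN"},
    {"order_id": 3, "amount": None, "country": "us"},  # bad record, no amount
]

def transform(rows):
    # Cast amounts to float, normalize country codes, drop rows missing amounts.
    return [
        {"order_id": r["order_id"],
         "amount": float(r["amount"]),
         "country": r["country"].upper()}
        for r in rows if r["amount"] is not None
    ]

def load(rows, conn):
    # Load the cleaned rows into the target table.
    conn.execute("CREATE TABLE orders (order_id INT, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (:order_id, :amount, :country)", rows)

conn = sqlite3.connect(":memory:")
load(transform(raw_rows), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 225.5
```

Frameworks like Airflow or dbt, named in the preferred qualifications, orchestrate and template exactly this kind of transform at scale.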
Posted 4 days ago
5.0 years
0 Lacs
India
Remote
About KYFEX: KYFEX is a leading AI consulting firm, dedicated to harnessing the power of artificial intelligence to revolutionize business operations across the globe. Our expertise in Large Language Models (LLMs) positions us at the cutting edge of AI technology, enabling us to offer unparalleled solutions to our clients. As we continue to grow, we're seeking a skilled Remote Principal Data Engineer to lead our efforts in managing and optimizing data processes for LLM training. Job Responsibilities: Lead the design and implementation of scalable data pipelines for the training of open-source LLMs. Work closely with AI researchers and engineers to understand data requirements and ensure efficient data processing for AI model training. Develop and maintain robust data storage solutions, ensuring data integrity, security, and compliance. Optimize data retrieval and processing techniques to reduce training time and improve model performance. Implement monitoring, logging, and alert systems to ensure high availability and performance of data systems. Collaborate with cross-functional teams to integrate LLM solutions into client projects, providing expert advice on data engineering best practices. Stay abreast of the latest developments in data engineering and LLM technologies, continuously improving KYFEX’s data strategies. Minimum Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering, with a proven track record of building and managing large-scale data pipelines. Strong proficiency in programming languages such as Python or Scala. Extensive experience with big data technologies (e.g., Hadoop, Spark, Kafka) and cloud services (AWS, Google Cloud, Azure). Demonstrated experience in data modeling, ETL development, and data warehousing. Knowledge of machine learning concepts and experience supporting data needs for AI/ML projects. 
Excellent problem-solving skills and the ability to work independently in a fully remote environment. Preferred Skills: Master’s degree or Ph.D. in a related field. Experience with open-source LLMs and understanding of NLP data processing. Familiarity with containerization and orchestration technologies (Docker, Kubernetes). Experience in implementing data security and privacy practices. Strong communication skills, with the ability to lead teams and collaborate effectively with stakeholders. Why Join KYFEX? Work at the forefront of AI technology with a team of experts passionate about innovation. Enjoy the flexibility and benefits of a fully remote position. Engage in challenging and meaningful projects that have a real-world impact. Benefit from a culture of continuous learning, professional development, and collaborative achievement. To Apply: Interested candidates are invited to submit their resume, a cover letter detailing their experience with LLMs, and any relevant project samples or GitHub links to careers@kyfex.com with "Remote Principal Data Engineer Application" as the subject line. KYFEX is committed to diversity and inclusion and encourages applications from all qualified individuals, including those from diverse backgrounds and underrepresented groups.
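The posting above concerns data pipelines for LLM training. One common preprocessing step in such pipelines is exact deduplication of training text; a minimal sketch (the normalization rules are illustrative, not KYFEX's actual pipeline) could be:

```python
import hashlib
import re

def normalize(text):
    # Lowercase and collapse whitespace so near-identical records hash the same.
    return re.sub(r"\s+", " ", text.strip().lower())

def dedupe(docs):
    # Exact dedup on a hash of the normalized text; a cheap first pass often
    # run before fuzzier techniques (e.g., MinHash) in LLM data preparation.
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Hello   World", "hello world", "Goodbye"]
print(dedupe(docs))  # ['Hello   World', 'Goodbye']
```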
Posted 4 days ago
11.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Pune Mode of Work: Full-time, On-site Experience required: 11+ years Who you are: We are looking for a hands-on Java architect with strong analytical and problem-solving skills. The ideal candidate will have extensive experience designing and building scalable, resilient systems using modern Java technologies. Key Responsibilities Design and implement scalable, secure, and high-performance backend solutions. Architect microservices-based applications leveraging Spring Boot 3.x.x and Java 17+ Provide leadership in implementing service mesh patterns and Kubernetes orchestration. Define and implement caching strategies using Hazelcast or Apache Ignite. Develop APIs using REST principles and reactive programming paradigms. Design solutions around event-driven architecture for real-time processing. Ensure best practices in software development, CI/CD, and DevOps. Collaborate with cross-functional teams including DevOps, QA, and Product Owners. Must-Have Skills Java 17+ Spring Boot 3.x.x Service Mesh Kubernetes Caching (Hazelcast / Apache Ignite) Document Databases (MongoDB, Couchbase, etc.) Event-Driven Design (Kafka, RabbitMQ, etc.) RESTful APIs design and development Reactive Programming (Project Reactor, WebFlux) Hibernate / JPA Good to have Docker AWS PCF Any frontend technology What We Offer Leadership & Impact: Drive impactful projects, integrate AI, and leave a mark. Growth & Learning: Continuous learning, SDLC mastery, career advancement. Recognition & Excellence: Celebrate dedication, rewards for contributions. Global Influence: Lead initiatives, reshape financial accessibility worldwide. Benefits Work-Life Harmony: Flexible schedules prioritize well-being. Rewarding Performance: Performance-based bonuses, annual rewards. Comprehensive Well-being: Provident Fund, health insurance. (ref:hirist.tech)
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come. Join us at Syensqo, where our IT team is gearing up to enhance its capabilities. We play a crucial role in the group's transformation—accelerating growth, reshaping progress, and creating sustainable shared value. The IT team is making operational adjustments to supercharge value across the entire organization. Here at Syensqo, we're one strong team! Our commitment to accountability drives us as we work hard to deliver value for our customers and stakeholders. In our dynamic and collaborative work environment, we add a touch of enjoyment while staying true to our motto: reinvent progress. Come be part of our transformation journey and contribute to the change as a future team member. We are looking for: As a Data/ML Engineer, you will play a central role in defining, implementing, and maintaining cloud governance frameworks across the organization. You will collaborate with cross-functional teams to ensure secure, compliant, and efficient use of cloud resources for data and machine learning workloads. Your expertise in full-stack automation, DevOps practices, and Infrastructure as Code (IaC) will drive the standardization and scalability of our cloud-based data and ML platforms.
Key requirements are: Ensuring cloud data governance Define and maintain central cloud governance policies, standards, and best practices for data, AI and ML workloads Ensure compliance with security, privacy, and regulatory requirements across all cloud environments Monitor and optimize cloud resource usage, cost, and performance for data, AI and ML workloads Design and Implement Data Pipelines Co-develop, co-construct, test, and maintain highly scalable and reliable data architectures, including ETL processes, data warehouses, and data lakes with the Data Platform Team Build and Deploy ML Systems Co-design, co-develop, and deploy machine learning models and associated services into production environments, ensuring performance, reliability, and scalability Infrastructure Management Manage and optimize cloud-based infrastructure (e.g., AWS, Azure, GCP) for data storage, processing, and ML model serving Collaboration Work collaboratively with data scientists, ML engineers, security and business stakeholders to align cloud governance with organizational needs Provide guidance and support to teams on cloud architecture, data management, and ML operations. Work collaboratively with other teams to transition prototypes and experimental models into robust, production-ready solutions Data Governance and Quality: Implement best practices for data governance, data quality, and data security to ensure the integrity and reliability of our data assets. Performance and Optimisation: Identify and implement performance improvements for data pipelines and ML models, optimizing for speed, cost-efficiency, and resource utilization. 
Monitoring and Alerting Establish and maintain monitoring, logging, and alerting systems for data pipelines and ML models to proactively identify and resolve issues Tooling and Automation Design and implement full-stack automation for data pipelines, ML workflows, and cloud infrastructure Build and manage cloud infrastructure using IaC tools (e.g., Terraform, CloudFormation) Develop and maintain CI/CD pipelines for data and ML projects Promote DevOps culture and best practices within the organization Develop and maintain tools and automation scripts to streamline data operations, model training, and deployment processes Stay current on new ML/AI trends: Keep abreast of the latest advancements in data engineering, machine learning, and cloud technologies, evaluating and recommending new tools and approaches Document processes, architectures, and standards for knowledge sharing and onboarding Education and experience Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field. (Relevant work experience may be considered in lieu of a degree). Programming: Strong proficiency in Python (essential) and experience with other relevant languages like Java, Scala, or Go. Data Warehousing/Databases: Solid understanding and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) is highly desirable. Big Data Technologies: Hands-on experience with big data processing frameworks (e.g., Spark, Flink, Hadoop). Cloud Platforms: Experience with at least one major cloud provider (AWS, Azure, or GCP) and their relevant data and ML services (e.g., S3, EC2, Lambda, EMR, SageMaker, Dataflow, BigQuery, Azure Data Factory, Azure ML). ML Concepts: Fundamental understanding of machine learning concepts, algorithms, and workflows.
MLOps Principles: Familiarity with MLOps principles and practices for deploying, monitoring, and managing ML models in production. Version Control: Proficiency with Git and collaborative development workflows. Problem-Solving: Excellent analytical and problem-solving skills with a strong attention to detail. Communication: Strong communication skills, able to articulate complex technical concepts to both technical and non-technical stakeholders. Bonus Points (Highly Desirable Skills & Experience): Experience with containerisation technologies (Docker, Kubernetes). Familiarity with CI/CD pipelines for data and ML deployments. Experience with stream processing technologies (e.g., Kafka, Kinesis). Knowledge of data visualization tools (e.g., Tableau, Power BI, Looker). Contributions to open-source projects or a strong portfolio of personal projects. Experience with [specific domain knowledge relevant to your company, e.g., financial data, healthcare data, e-commerce data]. Language skills Fluent English What’s in it for the candidate Be part of a highly motivated team of explorers Help make a difference and thrive in Cloud and AI technology Chart your own course and build a fantastic career Have fun and enjoy life with an industry leading remuneration pack About Us Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity. 
At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
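The responsibilities above include establishing monitoring, logging, and alerting systems for data pipelines and ML models. As an illustrative Python sketch (the metric names and thresholds are invented for the example, standing in for whatever a real observability stack like Prometheus would collect), a simple threshold check might look like:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def check_pipeline_health(metrics, thresholds):
    # Compare observed pipeline metrics against alert thresholds and return
    # the names of metrics that breached, logging a warning for each one.
    breaches = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            logging.warning("ALERT: %s=%s exceeds threshold %s", name, value, limit)
            breaches.append(name)
    return breaches

metrics = {"rows_failed": 120, "latency_s": 45.0}
thresholds = {"rows_failed": 100, "latency_s": 60.0}
print(check_pipeline_health(metrics, thresholds))  # ['rows_failed']
```

Production systems would scrape these metrics continuously and route breaches to a pager or dashboard rather than a log line, but the compare-against-threshold core is the same.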
Posted 4 days ago
3.0 - 5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position Title: Full Stack Engineer (Golang) Experience: 3-5 years Location: Ahmedabad About Motadata Motadata is a renowned IT monitoring and management software company that has been transforming how businesses manage their ITOps since its inception. Our vision is to revolutionize the way organizations extract valuable insights from their IT networks. Bootstrapped since inception, Motadata has built up a formidable product suite comprising cutting-edge solutions, empowering enterprises to make informed decisions and optimize their IT infrastructure. As a market leader, we take pride in our ability to collect and analyze data from various sources, in any format, providing a unified view of IT monitoring data. We are customer-centric and continuously strive for excellence, enabling organizations to tackle complex IT challenges with ease. Follow us on LinkedIn and be part of an innovative team. We're looking for folks who want to join us in unlocking the true potential of IT together! Position Overview We are looking for a highly skilled Fullstack Developer (Golang) who excels in both backend and frontend development. The ideal candidate will have strong hands-on experience with Golang, event/message-driven architecture, and proficiency in modern frontend technologies such as React.js, Vue.js, or Next.js. A background in network programming is a plus. Role & Responsibility Develop and maintain backend services using Golang Design microservices with a focus on performance, scalability, and clean architecture Build RESTful APIs and ensure their smooth integration with frontend components Utilize concurrency and parallelism in Golang using goroutines, channels, etc.
Implement and maintain event-driven/message-driven workflows using Kafka, RabbitMQ, or ActiveMQ Work with SQL/NoSQL databases such as PostgreSQL, MongoDB, or Redis Build responsive, interactive user interfaces using React.js, Vue.js, or Next.js Integrate frontend components with backend services to ensure seamless functionality Follow UI/UX best practices and collaborate with design teams for implementation Ensure application performance, responsiveness, and cross-browser compatibility Participate in Agile practices like sprint planning, daily stand-ups, and retrospectives Collaborate with QA, product managers, and fellow engineers to deliver high-quality features Write clean, maintainable, and testable code Continuously learn and apply modern development practices and tools Skills and Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or a related field 3-5 years of fullstack development experience Minimum 2 years of hands-on experience with Golang Proficiency in React.js, Vue.js, or Next.js Strong understanding of event-driven/message-driven architecture Experience with REST APIs, microservices, and backend scalability Knowledge of network programming is a plus Familiarity with databases: PostgreSQL, MongoDB, or Redis Strong understanding of version control (Git) and Agile methodologies Excellent analytical, problem-solving, and communication skills Life At Motadata Work-Life Balance: We encourage it. Flexible working hours 5 days a week (ref:hirist.tech)
Posted 5 days ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
We are seeking an experienced Senior MEAN Stack Developer with 2-4 years of hands-on experience in designing, developing, and maintaining scalable web applications. The ideal candidate should have expertise in MongoDB, Express.js, Angular, and Node.js (MEAN stack), with strong problem-solving abilities and leadership skills. You will work closely with cross-functional teams to build high-performance applications and contribute to architectural decisions. Responsibilities: Design, develop, and deploy full-stack web applications using the MEAN stack. Architect and optimize scalable, high-performance web applications. Develop RESTful APIs and GraphQL services for seamless integration with frontend applications. Implement authentication and authorization mechanisms (JWT, OAuth, Role-Based Access Control). Optimize database queries and performance in MongoDB using Mongoose. Mentor and guide junior developers, conducting code reviews and technical discussions. Integrate third-party APIs, cloud services, and DevOps solutions for automation and deployment. Implement CI/CD pipelines and ensure best practices for software development and deployment. Troubleshoot complex issues, debug applications, and improve code quality. Stay updated with emerging technologies and contribute to the continuous improvement of development processes. Skills & Qualifications: 3-5 years of experience in MEAN stack development. Strong proficiency in Angular 15+ and frontend optimization techniques. Advanced knowledge of Node.js and Express.js, including asynchronous programming and event-driven architecture. Expertise in MongoDB, MySQL & PostgreSQL (Schema Design, Indexing, Aggregation Framework, Replication, and Sharding). Experience in building microservices-based architectures. Strong understanding of Docker, Kubernetes, and CI/CD pipelines. Proficiency in Git, GitHub, or GitLab for version control. Experience with message queues (Redis, RabbitMQ, Kafka) for distributed systems.
Solid understanding of WebSockets, real-time data processing, and caching strategies. Hands-on experience in unit testing, integration testing, and TDD. Strong analytical and debugging skills with experience in performance optimization. Excellent communication and leadership skills. Basic understanding and hands-on experience with HTML & CSS. Preferred Skills: Experience with GraphQL API development. Familiarity with AWS (EC2, S3, Lambda), Azure, or Google Cloud Platform. Knowledge of Serverless architecture and cloud functions. Knowledge of Next.js. Knowledge of React.js. Experience in Angular Universal (Server-Side Rendering, SSR). Knowledge of Nginx, PM2, and load balancing strategies. Exposure to AI/ML-based applications using Node.js. Boost productivity using AI tools such as ChatGPT. (ref:hirist.tech)
Posted 5 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a highly experienced and hands-on Technical Architect with a strong background in backend development using Java or Node.js. As a backend architect, you will design scalable and high-performance systems, set coding standards, mentor engineers, and play a key role in shaping the technical direction of our backend services. Responsibilities Design and architect backend systems with a focus on scalability, performance, reliability, and security. Lead the evaluation and selection of backend technologies, frameworks, and tools. Collaborate with product managers, frontend teams, and DevOps to deliver robust and scalable solutions. Provide technical leadership and mentorship to development teams. Create high-level and detailed design documents, architecture diagrams, and solution blueprints. Perform code reviews and enforce best practices in development and architecture. Define and drive architectural principles, technical standards, and development processes. Ensure smooth integration with third-party APIs and services. Contribute to capacity planning and performance tuning of backend services. Stay up-to-date with emerging technologies and industry trends. Optimize infrastructure for cost, performance, and scalability. Requirements 10+ years of backend development experience with at least 4+ years in an architectural or lead role. Expertise in Java (Spring Boot) or Node.js (Express/NestJS). Strong knowledge of microservices architecture, RESTful APIs, and asynchronous messaging (Kafka/RabbitMQ). Experience with databases - both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis). Proficient in CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes). Strong understanding of software design patterns, clean architecture, and system integration. Experience in cloud platforms like AWS, GCP, or Azure. (ref:hirist.tech)
Posted 5 days ago
10.0 years
0 Lacs
Delhi, India
Remote
JOB_POSTING-3-72216-3 Job Description Role Title: VP, Data Engineering Tech Lead (L12) Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 on the LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore. Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization. The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL); collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency; and building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. The team is also responsible for the SYF public cloud platform & services: govern health, performance, capacity, and costs of resources, ensure adherence to service levels, and build well-defined processes for cloud application development and service enablement.
Role Summary/Purpose
We are seeking a highly skilled Cloud Technical Lead with expertise in data engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF’s key data domains (originations, loan activity, collections, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment.

Key Responsibilities
- Manage the end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects.
- Provide guidance on suitable options, and design and create data pipelines for analytical solutions across data lake, data warehouse and cloud implementations.
- Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift.
- Ensure data integration, transformation, and storage processes are optimized for scalability and performance in the cloud environment.
- Ensure data security, governance, and compliance in the cloud infrastructure.
- Provide leadership and guidance to data engineering teams, ensuring best practices are followed.
- Ensure timely delivery of high-quality solutions in an Agile environment.

Required Skills/Knowledge
- Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or, in lieu of a degree, 12+ years of relevant experience.
- Minimum 10+ years of experience managing large-scale data platform (data warehouse/data lake/cloud) environments.
- Minimum 10+ years of financial services experience.
- Minimum 6+ years of experience working with data warehouses/data lakes/cloud.
- 6+ years of hands-on programming experience in ETL tools; Ab Initio or Informatica highly preferred.
- Ability to read and reverse engineer the logic in Ab Initio graphs.
- Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
- Working knowledge of Hive, Spark, Kafka, and other data lake technologies.
- Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
- Experience analyzing system requirements and implementing migration methods for existing data.
- Ability to develop and maintain strong collaborative relationships at all levels across IT and the business.
- Excellent written and oral communication skills, along with a strong ability to lead and influence others.
- Experience working iteratively in a fast-paced agile environment.
- Demonstrated ability to drive change and work effectively across business and geographical boundaries.
- Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology.
- Superior decision-making, client relationship, and vendor management skills.

Desired Skills/Knowledge
- Prior work experience in a credit card/banking/fintech company.
- Experience dealing with sensitive data in a highly regulated environment.
- Demonstrated implementation of complex and innovative solutions.
- Agile experience using JIRA or similar Agile tools.

Eligibility Criteria
- Bachelor's degree in Computer Science or a similar technical field of study (Master's degree preferred).
- Minimum 12+ years of experience managing large-scale data platform (data warehouse/data lake/cloud) environments.
- Minimum 12+ years of financial services experience.
- Minimum 8+ years of experience working with Oracle data warehouses/data lakes/cloud.
- 8+ years of programming experience in ETL tools; Ab Initio or Informatica highly preferred.
- Ability to read and reverse engineer the logic in Ab Initio graphs.
- Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
- Rigorous data analysis through SQL in Oracle and various Hadoop technologies.
- Involvement in large-scale data analytics migration from on-premises to a public cloud.
- Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
- Experience analyzing system requirements and implementing migration methods for existing data.
- Excellent written and oral communication skills, along with a strong ability to lead and influence others.
- Experience working iteratively in a fast-paced agile environment.

Work Timings: 3:00 PM IST to 12:00 AM IST
(This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.)

For Internal Applicants
- Understand the criteria and mandatory skills required for the role before applying.
- Inform your manager and HRM before applying for any role on Workday.
- Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
- Must not be on any corrective action plan (First Formal/Final Formal, PIP).
- Only L10+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Level/Grade: 12
Job Family Group: Information Technology
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Release Train Manager. At Barclays, we don’t just embrace change – we drive it and help craft innovative solutions. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. You will lead the delivery of event streaming (Kafka, Pulsar, Kinesis, etc.) and operational data store (Cassandra, MongoDB, Redis, etc.) solutions, preferably in banking or financial services.

To be successful as a Release Train Manager, you should have experience with:
- Strong experience in Release Train Manager, Scrum Master, or Agile Program Management roles.
- Proven experience leading delivery of event streaming (Kafka, Pulsar, Kinesis, etc.) and operational data store (Cassandra, MongoDB, Redis, etc.) solutions, preferably in banking or financial services.
- Strong understanding of Agile/SAFe methodologies and experience facilitating Agile ceremonies.
- Familiarity with event-driven architectures, real-time data processing, and operational data stores.
- Experience with cloud platforms (AWS, Azure, GCP) and DevOps practices.
- Excellent communication, facilitation, and stakeholder management skills.
- Proficiency with Agile tools such as JIRA, Confluence, Azure DevOps, etc.
- Knowledge of regulatory, compliance, and security requirements in the banking industry.

Some other highly valued skills may include:
- SAFe Release Train Engineer (RTE) certification or equivalent Agile certifications.
- Experience with large-scale event streaming or operational data store programs.
- Exposure to data governance, data privacy, and real-time analytics frameworks.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role will be based out of Pune.
Purpose of the role
To manage the efficient delivery of large-scale technical projects and capabilities across the bank, and to collaborate with internal and external stakeholders to understand their needs and expectations throughout the software product lifecycle, adhering to agreed time, budget and quality requirements.

Accountabilities
- Management of the delivery, resource allocation and improvement of complex technical project capabilities across the bank, using Agile/Scrum methodologies for planning, stand-ups, demos, retrospectives and the execution of sprints.
- Collaboration with customers and business teams to manage and implement customer trials, to support the proposal of technically feasible solutions and of effort and timeline expectations.
- Collaboration with software engineers, quality teams, product managers and other engineering teams to deliver high-quality products and features through the software project lifecycle, aligned to the bank's objectives, regulation and security policies.
- Identification, assessment, and mitigation of risks associated with technical projects to prepare the bank for potential challenges.
- Management of change requests and communication with stakeholders throughout the project lifecycle.
- Management of vendor relations involved in technical projects to ensure they deliver according to agreed terms.
- Staying abreast of the latest industry technology trends, to evaluate and adopt new approaches that improve delivery outcomes and to foster a culture of continuous learning, technical excellence and growth.

Vice President Expectations
- To contribute to or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures.
- If managing a team, they define jobs and responsibilities, plan for the department’s future needs and operations, counsel employees on performance and contribute to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short- and long-term goals and ensuring that budgets and schedules meet corporate requirements.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
- OR, for an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction. They will lead collaborative, multi-year assignments, guide team members through structured assignments, and identify the need to include other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long-term profits, organisational risks and strategic decisions.
- Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment.
- Manage and mitigate risks through assessment, in support of the control and governance agenda.
- Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does.
- Demonstrate comprehensive understanding of the organisation's functions to contribute to achieving the goals of the business.
- Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and business strategies.
- Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions.
- Adopt and include the outcomes of extensive research in problem-solving processes.
- Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 5 days ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
JOB_POSTING-3-72216-2 Job Description Role Title: VP, Data Engineering Tech Lead (L12) Company Overview COMPANY OVERVIEW: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 under LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore Organizational Overview Organizational Overview: This role will be part of the Data Architecture & Analytics group part of CTO organization Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading(ETL). Collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency. Building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. Responsible for the SYF public cloud platform & services. Govern health, performance, capacity, and costs of resources and ensure adherence to service levels Build well defined processes for cloud application development and service enablement. 
Role Summary/Purpose We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF key data domains (originations, loan activity, collection, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment. Key Responsibilities Manage end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects. Providing guidance on suitable options, designing, and creating data pipeline for the analytical solutions across data lake, data warehouses and cloud implementations. Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift. Ensure data integration, transformation, and storage process are optimized for scalability and performance in cloud environment. Ensure data security, governance, and compliance in the cloud infrastructure. Provide leadership and guidance to data engineering teams, ensuring best practices are followed. Ensure timely delivery of high-quality solutions in an Agile environment. Required Skills/Knowledge Minimum 10+ years of experience with Bachelor's degree in Computer Science or similar technical field of study or in lieu of a degree 12+ years of relevant experience Minimum 10+ years of experience in managing large scale data platforms (Data warehouse/Data Late/Cloud) environments Minimum 10+ years of financial services experience Minimum 6+ years of experience working with Data Warehouses/Data Lake/Cloud. 6+ years’ of hards-on programming experience in ETL tools - Ab Initio or Informatica highly preferred. 
Be able to read and reverse engineer the logic in Ab Initio graphs. Hands on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Working knowledge of Hive, Spark, Kafka and other data lake technologies. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience to analyze system requirements and implement migration methods for existing data. Ability to develop and maintain strong collaborative relationships at all levels across IT and the business. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Demonstrated ability to drive change and work effectively across business and geographical boundaries. Expertise in evaluating technology and solution engineering, with strong focus on architecture and deployment of new technology Superior decision-making, client relationship, and vendor management skills. Desired Skills/Knowledge Prior work experience in a credit card/banking/fintech company. Experience dealing with sensitive data in a highly regulated environment. Demonstrated implementation of complex and innovative solutions. Agile experience using JIRA or similar Agile tools. Eligibility Criteria Bachelor's degree in Computer Science or similar technical field of study (Masters degree preferred) Minimum 12+ years of experience in managing large scale data platforms (Data warehouse/Data Late/Cloud) environments Minimum 12+ years of financial services experience Minimum 8+ years of experience working with Oracle Data Warehouses/Data Lake/Cloud 8+ years’ of programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Rigorous data analysis through SQL in Oracle and various Hadoop technologies. 
Involvement in large scale data analytics migration from on premises to a public cloud Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience to analyze system requirements and implement migration methods for existing data. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Work Timings: 3:00 PM IST to 12:00 AM IST (WORK TIMINGS: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details .) For Internal Applicants Understand the criteria or mandatory skills required for the role, before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be any corrective action plan (First Formal/Final Formal, PIP) L10+ Employees who have completed 18 months in the organization and 12 months in current role and level are only eligible. L10+ Employees can apply Level / Grade : 12 Job Family Group Information Technology
Posted 5 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
JOB_POSTING-3-72216-1 Job Description Role Title: VP, Data Engineering Tech Lead (L12) Company Overview COMPANY OVERVIEW: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 under LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore Organizational Overview Organizational Overview: This role will be part of the Data Architecture & Analytics group part of CTO organization Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading(ETL). Collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency. Building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. Responsible for the SYF public cloud platform & services. Govern health, performance, capacity, and costs of resources and ensure adherence to service levels Build well defined processes for cloud application development and service enablement. 
Role Summary/Purpose We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF key data domains (originations, loan activity, collection, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment. Key Responsibilities Manage end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects. Providing guidance on suitable options, designing, and creating data pipeline for the analytical solutions across data lake, data warehouses and cloud implementations. Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift. Ensure data integration, transformation, and storage process are optimized for scalability and performance in cloud environment. Ensure data security, governance, and compliance in the cloud infrastructure. Provide leadership and guidance to data engineering teams, ensuring best practices are followed. Ensure timely delivery of high-quality solutions in an Agile environment. Required Skills/Knowledge Minimum 10+ years of experience with Bachelor's degree in Computer Science or similar technical field of study or in lieu of a degree 12+ years of relevant experience Minimum 10+ years of experience in managing large scale data platforms (Data warehouse/Data Late/Cloud) environments Minimum 10+ years of financial services experience Minimum 6+ years of experience working with Data Warehouses/Data Lake/Cloud. 6+ years’ of hards-on programming experience in ETL tools - Ab Initio or Informatica highly preferred. 
Be able to read and reverse engineer the logic in Ab Initio graphs. Hands on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Working knowledge of Hive, Spark, Kafka and other data lake technologies. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience to analyze system requirements and implement migration methods for existing data. Ability to develop and maintain strong collaborative relationships at all levels across IT and the business. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Demonstrated ability to drive change and work effectively across business and geographical boundaries. Expertise in evaluating technology and solution engineering, with strong focus on architecture and deployment of new technology Superior decision-making, client relationship, and vendor management skills. Desired Skills/Knowledge Prior work experience in a credit card/banking/fintech company. Experience dealing with sensitive data in a highly regulated environment. Demonstrated implementation of complex and innovative solutions. Agile experience using JIRA or similar Agile tools. Eligibility Criteria Bachelor's degree in Computer Science or similar technical field of study (Masters degree preferred) Minimum 12+ years of experience in managing large scale data platforms (Data warehouse/Data Late/Cloud) environments Minimum 12+ years of financial services experience Minimum 8+ years of experience working with Oracle Data Warehouses/Data Lake/Cloud 8+ years’ of programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Rigorous data analysis through SQL in Oracle and various Hadoop technologies. 
Involvement in large scale data analytics migration from on premises to a public cloud Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience to analyze system requirements and implement migration methods for existing data. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Work Timings: 3:00 PM IST to 12:00 AM IST (WORK TIMINGS: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details .) For Internal Applicants Understand the criteria or mandatory skills required for the role, before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be any corrective action plan (First Formal/Final Formal, PIP) L10+ Employees who have completed 18 months in the organization and 12 months in current role and level are only eligible. L10+ Employees can apply Level / Grade : 12 Job Family Group Information Technology
Posted 5 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
JOB_POSTING-3-72216 Job Description Role Title: VP, Data Engineering Tech Lead (L12) Company Overview COMPANY OVERVIEW: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 under LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore Organizational Overview Organizational Overview: This role will be part of the Data Architecture & Analytics group part of CTO organization Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading(ETL). Collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency. Building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. Responsible for the SYF public cloud platform & services. Govern health, performance, capacity, and costs of resources and ensure adherence to service levels Build well defined processes for cloud application development and service enablement. 
Role Summary/Purpose We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF key data domains (originations, loan activity, collection, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment. Key Responsibilities Manage end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects. Providing guidance on suitable options, designing, and creating data pipeline for the analytical solutions across data lake, data warehouses and cloud implementations. Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift. Ensure data integration, transformation, and storage process are optimized for scalability and performance in cloud environment. Ensure data security, governance, and compliance in the cloud infrastructure. Provide leadership and guidance to data engineering teams, ensuring best practices are followed. Ensure timely delivery of high-quality solutions in an Agile environment. Required Skills/Knowledge Minimum 10+ years of experience with Bachelor's degree in Computer Science or similar technical field of study or in lieu of a degree 12+ years of relevant experience Minimum 10+ years of experience in managing large scale data platforms (Data warehouse/Data Late/Cloud) environments Minimum 10+ years of financial services experience Minimum 6+ years of experience working with Data Warehouses/Data Lake/Cloud. 6+ years’ of hards-on programming experience in ETL tools - Ab Initio or Informatica highly preferred. 
Be able to read and reverse engineer the logic in Ab Initio graphs. Hands on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Working knowledge of Hive, Spark, Kafka and other data lake technologies. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience to analyze system requirements and implement migration methods for existing data. Ability to develop and maintain strong collaborative relationships at all levels across IT and the business. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Demonstrated ability to drive change and work effectively across business and geographical boundaries. Expertise in evaluating technology and solution engineering, with strong focus on architecture and deployment of new technology Superior decision-making, client relationship, and vendor management skills. Desired Skills/Knowledge Prior work experience in a credit card/banking/fintech company. Experience dealing with sensitive data in a highly regulated environment. Demonstrated implementation of complex and innovative solutions. Agile experience using JIRA or similar Agile tools. Eligibility Criteria Bachelor's degree in Computer Science or similar technical field of study (Masters degree preferred) Minimum 12+ years of experience in managing large scale data platforms (Data warehouse/Data Late/Cloud) environments Minimum 12+ years of financial services experience Minimum 8+ years of experience working with Oracle Data Warehouses/Data Lake/Cloud 8+ years’ of programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Rigorous data analysis through SQL in Oracle and various Hadoop technologies. 
Involvement in large-scale data analytics migration from on-premises to a public cloud. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment.

Work Timings: 3:00 PM IST to 12:00 AM IST (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.)

For Internal Applicants

Understand the criteria or mandatory skills required for the role before applying. Inform your manager and HRM before applying for any role on Workday. Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format). Must not be on any corrective action plan (First Formal/Final Formal, PIP). Only L10+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Level / Grade: 12 Job Family Group: Information Technology
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Release Train Manager. At Barclays, we don’t just embrace change – we drive it and help craft innovative solutions. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. You will lead the delivery of event streaming (Kafka, Pulsar, Kinesis, etc.) and operational data store (Cassandra, MongoDB, Redis, etc.) solutions, preferably in banking or financial services. To be successful as a Release Train Manager, you should have experience with: Strong experience in Release Train Manager, Scrum Master, or Agile Program Management roles. Proven experience leading delivery of event streaming (Kafka, Pulsar, Kinesis, etc.) and operational data store (Cassandra, MongoDB, Redis, etc.) solutions, preferably in banking or financial services. Strong understanding of Agile/SAFe methodologies and experience facilitating Agile ceremonies. Familiarity with event-driven architectures, real-time data processing, and operational data stores. Experience with cloud platforms (AWS, Azure, GCP) and DevOps practices. Excellent communication, facilitation, and stakeholder management skills. Proficiency with Agile tools such as JIRA, Confluence, Azure DevOps, etc. Knowledge of regulatory, compliance, and security requirements in the banking industry. Some Other Highly Valued Skills May Include SAFe Release Train Engineer (RTE) certification or equivalent Agile certifications. Experience with large-scale event streaming or operational data store programs. Exposure to data governance, data privacy, and real-time analytics frameworks. You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role will be based out of Pune.
Purpose of the role To manage the efficient delivery of large-scale technical projects and capabilities across the bank and collaborate with internal and external stakeholders to understand their needs and expectations throughout the software product lifecycle, adhering to agreed time, budget and quality requirements. Accountabilities Management of the delivery, resource allocation and improvement of complex technical project capabilities across the bank using Agile/Scrum methodologies for planning, stand-ups, demos, retrospectives and the execution of sprints. Collaboration with customers and business teams to manage and implement customer trials to support the proposal of technically feasible solutions, and effort and timeline expectations. Collaboration with software engineers, quality teams, product managers and other engineering teams to deliver high-quality products and features through the software project lifecycle aligned to the bank's objectives, regulations and security policies. Identification, assessment, and mitigation of risks associated with technical projects to prepare the bank for potential challenges. Management of change requests and communication with stakeholders throughout the project lifecycle. Management of vendor relations involved in technical projects to ensure they deliver according to agreed terms. Stay abreast of the latest industry trends and technologies, to evaluate and adopt new approaches to improve delivery outcomes and to foster a culture of continuous learning, technical excellence and growth. Vice President Expectations To contribute or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures.
If managing a team, they define jobs and responsibilities, planning for the department’s future needs and operations, counselling employees on performance and contributing to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction. They will lead collaborative, multi-year assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions. Advise key stakeholders, including functional leadership teams and senior management, on functional and cross functional areas of impact and alignment. Manage and mitigate risks through assessment, in support of the control and governance agenda. Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does. Demonstrate comprehensive understanding of the organisation's functions to contribute to achieving the goals of the business. Collaborate with other areas of work, for business aligned support areas, to keep up to speed with business activity and the business strategies.
Create solutions based on sophisticated analytical thought comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions. Adopt and include the outcomes of extensive research in problem solving processes. Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 5 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
ABOUT THE ROLE: We’re looking for a Senior Java Backend Developer to join our fast-paced engineering team and help build reliable, scalable, and cloud-native backend systems. In this role, you’ll design and implement modern microservices to support core business features like billing, collections, scheduling, and payment gateways. You’ll be hands-on with the latest Java and Spring frameworks, contributing across the full SDLC from concept and design to deployment and support in production. This is a great opportunity for someone who enjoys domain-driven design, event streaming, and building systems that directly impact end users at scale. WHAT YOU'LL DO: Design, build, and maintain robust, high-performance microservices using Java and Spring Boot Develop and expose RESTful APIs that support scalable platform integrations Work with Kafka and event-driven architecture for decoupled, asynchronous systems Optimize system performance through stress testing, profiling, and tuning Integrate cloud-native services and deploy to AWS environments using Docker and Kubernetes Collaborate with cross-functional teams including QA, Product, and DevOps Write clean, secure, and well-tested code following TDD and secure coding practices Troubleshoot and support production systems, performing root cause analysis Participate in design discussions, code reviews, and continuous improvement initiatives WHAT WE'RE LOOKING FOR: 5+ years of backend development experience with Java (8+) and Spring frameworks Experience with microservices architecture and REST API development Solid understanding of event-driven systems, ideally using Kafka Familiarity with scheduling, billing, or payment gateway workflows Cloud experience (preferably AWS) with deployment using Docker/Kubernetes Experience working in Agile/Scrum teams and CI/CD environments Strong problem-solving and debugging skills Knowledge of relational databases and SQL (e.g., PostgreSQL or MySQL) GOOD TO HAVE: Knowledge of DevOps practices
and infrastructure-as-code Experience with application security, such as secure code reviews and static analysis Prior work in a billing or payments platform Familiarity with API gateways, observability tooling, and container orchestration WHAT’S IN IT FOR YOU: Purpose-led company with a Values-focused culture – Best Life, One Team, Growth Mindset Time Off – competitive PTO plans with 15 days of Earned accrued leave, 12 days of Sick leave, and 12 days of Casual leave per year 11 Holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling Life Insurance and Personal Accident Insurance Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16 Support for working women with financial aid towards crèche facilities, ensuring a safe and nurturing environment for their little ones while they focus on their careers. We’re committed to diversity and passion, and encourage you to apply, even if you don’t demonstrate all the listed skillsets! ABC’S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION: ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person’s diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative.
Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com ABOUT ABC: ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes whether a multi-location chain, franchise or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com).
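The Senior Java Backend Developer posting above mentions Kafka and event-driven architecture for decoupled, asynchronous billing systems. A common concern in such systems is idempotent consumption under at-least-once delivery. The following is a minimal, illustrative Python sketch of that pattern only; a real implementation would be a Java/Spring Boot service reading from an actual Kafka broker, and every name here (BillingEvent, BillingConsumer, the account ids) is invented for the example, with a queue.Queue standing in for a topic partition.

```python
import queue
from dataclasses import dataclass

@dataclass(frozen=True)
class BillingEvent:
    event_id: str      # unique id used for idempotency / de-duplication
    account_id: str
    amount_cents: int

class BillingConsumer:
    def __init__(self):
        self._seen: set[str] = set()       # ids of already-processed events
        self.balances: dict[str, int] = {}

    def handle(self, event: BillingEvent) -> bool:
        """Apply the event exactly once; return False for duplicate deliveries."""
        if event.event_id in self._seen:
            return False                   # at-least-once delivery -> skip replays
        self._seen.add(event.event_id)
        self.balances[event.account_id] = (
            self.balances.get(event.account_id, 0) + event.amount_cents
        )
        return True

# Simulate a topic partition that redelivers one message.
topic = queue.Queue()
for ev in [
    BillingEvent("e1", "acct-42", 1500),
    BillingEvent("e2", "acct-42", 250),
    BillingEvent("e1", "acct-42", 1500),   # duplicate delivery of e1
]:
    topic.put(ev)

consumer = BillingConsumer()
applied = 0
while not topic.empty():
    applied += consumer.handle(topic.get())

print(applied)                          # 2 events applied, duplicate skipped
print(consumer.balances["acct-42"])     # 1750
```

Tracking processed event ids is only one de-duplication strategy; production systems often combine it with transactional writes or keyed upserts so the "seen" state survives restarts.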
Posted 5 days ago
0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Description Plutobyte Technologies Private Limited is a software company specializing in AI-driven solutions, custom software development, and cloud technologies. With a commitment to excellence and innovation, we empower businesses with innovative, scalable, and high-performance digital tools for seamless integration and enhanced efficiency. Role Description This role is for TD Bank (Canada); Plutobyte is arranging resources for TD Bank. Experience: 10+ years. Qualifications Strong experience with Apache Kafka and data streaming technologies Proficiency in distributed systems and data processing Experience with performance tuning and optimization Knowledge of cloud technologies and microservices architecture Excellent problem-solving and analytical skills Ability to work independently and in a team setting Bachelor's degree in Computer Science or a related field Experience in AI-driven solutions and custom software development is a plus
Posted 5 days ago
10.0 - 15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Applications Development Senior Manager is a senior management level position responsible for accomplishing results through the management of a team or department in an effort to establish and implement new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to drive applications systems analysis and programming activities. Responsibilities: Partner with multiple management teams to ensure appropriate integration of functions to meet goals, as well as identify and define necessary system enhancements to deploy new products and process improvements. Accountable for executing and driving results on large-scale efforts or multiple smaller efforts and serving as a development lead for most medium and large projects. This includes expertise with application development methodologies and standards for program analysis, design, coding, testing, debugging and implementation. Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards. Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint. Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation. Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals. Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions. Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary. Independent work style, requiring little or no guidance from more senior developers. Assist in the planning and managing of application development assignments, generally involving large budgets, cross-functional projects or multiple projects.
This includes effectively understanding and analyzing both technical and business risks and impact. Expected to effectively communicate those risks to the business owners, so that they can make informed decisions. Accountable for providing guidance on architecturally significant efforts during the preplanning phase, and ensuring principles and best practices are followed prior to initiation of work. In doing so, closely watch and evaluate Digital roadmaps, including impacts to support upcoming journeys. Publish design review extensions, and provide documented guidance aligned to sprint plans and timelines. Provide a long-term perspective for business and technology choices, using technical judgment to vet architecture as required. Able to direct teams on how to develop and deliver systems that are efficient with resource usage such as hardware, runtime, performance, load, and memory requirements. Accountable for providing architectural guidance to the SDEs based on best practices and in alignment with CTO guidelines and platform. Drive clarity and work with complete independence when business and/or technical strategy is not defined. Provide the corresponding architectural guidance, and conduct design reviews and code reviews based on the projects assigned to your LOB. The product definition and technical planning are out of scope. Define and communicate clear metrics on Design Reviews, Code Reviews, CI/CD and Design Patterns adoption. Communicate progress, anticipate bottlenecks, provide escalation management, and identify, assess, track and mitigate issues/risks at multiple levels. Recognize discordant views and take part in constructive dialog to resolve them. Demonstrate the ability to implement continuous improvement and the induction of new technology. Demonstrate examples of influence in scrum teams beyond your own area of focus.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 10-15 years of relevant experience in an Apps Development or systems analysis role Experience designing and delivering complex multi-system, distributed, highly available development projects Proven experience of solution design and architecture in successful deliveries using technologies like microservices, big data, Java, etc. Experience with architectural methodologies and frameworks; should be aware of architectural design patterns, design principles, etc. Should have strong hands-on experience with Java, Spring Boot, databases, JDBC, JMS, REST, big data/distributed systems, etc. Experience using ALM and CI/CD tools like Bitbucket, TFS, Jenkins, uDeploy, BMC RLM or related tools in an agile methodology Experience leading project solution architecture design at various levels, including high-level and detailed design Should have strong hands-on experience in big data processing and ETL frameworks like Spring Batch, Spring Data Flow, Apache Spark, Flink, etc. Should have experience with NoSQL databases like MongoDB/Cassandra Should have experience with distributed messaging frameworks like Kafka/RabbitMQ/JMS, etc. Should have experience with AI tools like GitHub Copilot, ChatGPT and OpenAI frameworks.
Should have exposure to application containerization, such as build/develop/deploy using Docker and Kubernetes Should have exposure to distributed caching frameworks like Hazelcast/Apache Ignite, etc. Should have exposure to application monitoring and tracing tools like Splunk, ELK, etc. Good to have exposure to reporting frameworks like Tableau, Informatica, Power BI, etc. Experience in Solution Architecture documentation and other architectural deliverables. Experience in providing solutions for a wide range of technologies – Java, Database, Cloud and Web Technologies, etc. Experience working in both waterfall and Agile methodologies. Ability to adjust priorities quickly as circumstances dictate. Demonstrated leadership and project management skills. Consistently demonstrates clear and concise written and verbal communication. Experience within a banking/finance environment would be preferable. Education: Bachelor’s degree/University degree or equivalent experience Master’s degree preferred This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above.
------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 5 days ago
13.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking an experienced Cloud AIOps Architect to lead the design and implementation of advanced AI-driven operational systems across multi-cloud and hybrid cloud environments. This role demands a blend of technical expertise, innovation, and leadership to develop scalable solutions for complex IT systems with a focus on automation, machine learning, and operational efficiency. Responsibilities Architect and design the AIOps solution leveraging AWS, Azure, and Cloud Agnostic services, ensuring portability and scalability Develop an end-to-end automated machine learning (ML) pipeline from data ingestion, DataOps, model training, to inference pipelines across multi-cloud environments Design hybrid architectures leveraging cloud-native services like Amazon SageMaker, Azure Machine Learning, and Kubernetes for development, model deployment, and orchestration Design and implement ChatOps integration, allowing users to interface with the platform through Slack, Microsoft Teams, or similar communication platforms Leverage Jupyter Notebooks in AWS SageMaker, Azure Machine Learning Studio, or cloud-agnostic environments to create model prototypes and experiment with datasets Lead the design of classification models and other ML models using AWS SageMaker training jobs, Azure ML training jobs, or open-source tools in a Kubernetes container Implement automated rule management systems using Python in containers deployed to AWS ECS/EKS, Azure AKS, or Kubernetes for cloud-agnostic solutions Architect the integration of ChatOps backend services using Python containers running in AWS ECS/EKS, Azure AKS, or Kubernetes for real-time interactions and updates Oversee the continuous deployment and retraining of models based on updated data and feedback loops, ensuring models remain efficient and adaptive Design platform-agnostic solutions to ensure that the system can be ported across different cloud environments or run in hybrid clouds (on-premises and cloud) Requirements 13+ 
years of overall experience and 7+ years of experience in AIOps, Cloud Architecture, or DevOps roles Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, EKS Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, Azure AKS Hands-on experience working on the design, development, and deployment of contact centre solutions at scale Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments Expertise in implementing ChatOps solutions using platforms like Microsoft Teams, Slack, and integrating them with AIOps automation Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures Strong programming skills in Python for rule management, automation, and integration with cloud services Experience in Kafka, Azure DevOps, and AWS DevOps for CI/CD pipelines
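The AIOps role above calls for "automated rule management systems using Python in containers." As a rough illustration of that idea only, the sketch below evaluates a set of operational rules against an incoming metrics sample and emits alerts, the kind of logic that might sit behind an inference or ChatOps pipeline. The rule names, metric fields, and thresholds are all invented for the example; a deployed system would load rules from configuration and publish alerts to a channel rather than returning a list.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                          # rule identifier
    metric: str                        # which metric the rule inspects
    predicate: Callable[[float], bool] # condition that fires the alert
    severity: str

# Example rule set (thresholds are illustrative, not recommendations).
RULES = [
    Rule("high-cpu", "cpu_pct", lambda v: v > 90.0, "critical"),
    Rule("elevated-latency", "p99_latency_ms", lambda v: v > 500.0, "warning"),
    Rule("error-spike", "error_rate", lambda v: v > 0.05, "critical"),
]

def evaluate(metrics: dict[str, float], rules=RULES) -> list[dict]:
    """Return one alert per rule whose predicate fires on this sample."""
    alerts = []
    for rule in rules:
        value = metrics.get(rule.metric)
        if value is not None and rule.predicate(value):
            alerts.append(
                {"rule": rule.name, "severity": rule.severity, "value": value}
            )
    return alerts

sample = {"cpu_pct": 97.2, "p99_latency_ms": 180.0, "error_rate": 0.09}
alerts = evaluate(sample)
print([a["rule"] for a in alerts])   # ['high-cpu', 'error-spike']
```

Keeping each rule's condition as a small callable makes the rule set data-driven, which is what lets such a service be updated (or ML-generated thresholds swapped in) without redeploying the container.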
Posted 5 days ago
3.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Role We are looking for entry-level full-stack software engineers who are passionate about solving business problems through innovation and engineering practices. This role will be responsible for writing code, pairing with other developers as appropriate, and decomposing acceptance criteria to understand team backlog deliverables, complexities, and risk, while working as a strong contributor on an agile team. From a technical standpoint, the Software Engineer has full-stack coding and implementation responsibilities and adheres to best practice principles, including modern cloud-based software development, agile and scrum, code quality, and tool usage. The Software Engineer works to understand and influence software architecture, while contributing to Citi’s and GFT’s technical user base. Responsibilities Apply depth of knowledge and expertise to all aspects of the software development lifecycle, and partner continuously with stakeholders. Develop and engineer solutions within an Agile software delivery team, working to collaboratively deliver sprint goals, write code, and participate in the broader Citi technical community and team-level Agile and Scrum processes. Contribute to the design, documentation, and development of world-class enterprise applications leveraging the latest technologies and software design patterns. Leverage technical knowledge of concepts and procedures within own area and basic knowledge of other areas to resolve issues, as necessary. Follow and contribute to defining technical and team standards. Collaborate with technical leadership to achieve established goals, in line with our broader technical strategy. Required Qualifications 3 to 4 years of experience as a Software Engineer/Developer using Java Multiple years of experience with software engineering best practices (unit testing, automation, design patterns, peer review, etc.).
Clear understanding of Data Structures and Object-Oriented Principles using Java Multiple years of experience with Service-Oriented and Microservices architectures, including REST and GraphQL implementations. Exposure to front-end technologies (Angular, JavaScript, TypeScript) Exposure to Cloud-native development and Container Orchestration tools (Serverless, Docker, Kubernetes, OpenShift, etc.) Multiple years of experience with frameworks like Spring Boot, Quarkus, Micronaut, or Vert.x Exposure to Continuous Integration and Continuous Delivery (CI/CD) pipelines, either on-premise or public cloud (e.g., Tekton, Harness, CircleCI, Cloudbees Jenkins, etc.). Multiple years of experience with agile and iterative software delivery (SCRUM, Kanban) Exposure to Database technologies (RDBMS, NoSQL, Oracle, MySQL, Mongo) Exposure to event-driven design and architecture (e.g., Kafka, Spark, Flink, RabbitMQ, etc.) B.Tech/B.Engg degree or equivalent work experience Preferred Qualifications Exposure to architecture experience in building horizontally scalable, highly available, highly resilient, and low latency applications. Exposure to API Management tools Exposure to Infrastructure as Code tools (e.g., Terraform, CloudFormation, etc.) Exposure to Security, Observability, and Monitoring (e.g., Grafana, Prometheus, Splunk, ELK, CloudWatch, etc.) Experience mentoring junior developers ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 5 days ago
10.0 - 16.0 years
35 - 50 Lacs
Bengaluru
Hybrid
Be part of something revolutionary At o9 Solutions, our mission is clear: be the Most Valuable Platform (MVP) for enterprises. With our AI-driven platform, the o9 Digital Brain, we integrate global enterprises' siloed planning capabilities, helping them capture millions and, in some cases, billions of dollars in value leakage. But our impact doesn't stop there. Businesses that plan better and faster also reduce waste, which drives better outcomes for the planet, too. Role: Software Development Test Architect, QA - WebAPI Mandatory skills: API Architect - backend testing experience, REST APIs, Microservices, Architecture (Kubernetes), CI/CD, Performance testing/scaling of services, Database testing (Relational - SQL, PostgreSQL; NoSQL - MongoDB, Redis, Spark) About the role... As a QA Architect with 10-16 years of experience, you will be responsible for designing, enhancing and implementing effective test automation frameworks for microservices and API testing. Your role will include performance testing and tuning on Docker containers to ensure the scalability and reliability of our applications. Additionally, you will be instrumental in integrating automated testing into the CI/CD pipeline using Jenkins and Azure DevOps within the .Net environment. Your expertise in containerization orchestration will be pivotal in improving our software deployment process. Your responsibilities will include, but are not limited to: Test Automation Framework and Automation: Design, develop, and maintain scalable and maintainable test automation frameworks for microservices and API testing using tools like Robot Framework. Implement best practices and standards for test automation to achieve high test coverage and minimize test maintenance efforts. Create and execute automated test scripts and ensure they are integrated into the CI/CD pipeline. Microservices and API Automation: Develop and execute test plans for API endpoints and microservices to validate functionality, performance, and data exchange.
Design test cases to verify communication between different microservices in a distributed architecture. Performance Testing and Tuning on Docker Containers: Identify performance bottlenecks in Docker containers and devise strategies for performance tuning and scalability. Conduct load and stress testing to ensure applications can handle peak workloads without compromising performance and stability. CI/CD Integration: Integrate automated tests into the CI/CD pipeline using Jenkins or Azure DevOps to achieve continuous testing and deployment. Work with development and DevOps teams to optimize the CI/CD workflow and automate the release process. Containerization and Orchestration: Utilize containerization technologies such as Docker to enhance application portability and consistency. Work with container orchestration tools (e.g., Kubernetes) to streamline application deployment, scaling, and monitoring. Quality Assurance Strategy: Collaborate with development, testing, and DevOps teams to define and implement a comprehensive quality assurance strategy. Provide guidance and mentorship to QA team members on test automation best practices and testing methodologies. Characteristics: You thrive in a fast-paced, challenging environment and enjoy working within diverse corporate cultures and organizational structures. We highly value team spirit: transparency and frequent communication are key. At o9, this is not limited by hierarchy, distance, or function. What we'll do for you: Flat organization with a very strong entrepreneurial culture (and no corporate politics). Great people and unlimited fun at work. The possibility to really make a difference in a scale-up environment. Support network: work with a team you can learn from every day. Diversity: we pride ourselves on our international working environment.
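The API test automation responsibilities above can be illustrated with a small sketch. This is not o9's framework: the `OrderResponseValidator` class, the field names, and the response shape are hypothetical, and the example uses plain Java (rather than the Robot Framework the posting mentions) to show the kind of contract check an automated API test typically asserts against a parsed response.

```java
import java.util.Map;

// Hypothetical validator for illustration only; field names are invented.
class OrderResponseValidator {
    // Returns true when a parsed API response carries the fields a contract test expects.
    static boolean isValid(Map<String, Object> response) {
        return response.containsKey("orderId")
                && response.get("status") instanceof String
                && !((String) response.get("status")).isEmpty();
    }
}

class ApiTestSketch {
    public static void main(String[] args) {
        Map<String, Object> ok = Map.of("orderId", 42, "status", "CONFIRMED");
        Map<String, Object> bad = Map.of("status", "");
        System.out.println(OrderResponseValidator.isValid(ok));   // true
        System.out.println(OrderResponseValidator.isValid(bad));  // false
    }
}
```

In a real pipeline the map would come from deserializing an HTTP response, and checks like this run on every CI build so contract regressions surface before deployment.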
Aim10X: https://o9solutions.com/events/aim10x-digital?utm_source=o9er&utm_medium=personal&utm_campaign=KusumaManjunath AI is firmly on every CEO's agenda; o9 @ Davos & Reflections: https://o9solutions.com/articles/why-ai-is-topping-the-ceo-agenda/ Work-Life Balance: https://youtu.be/IHSZeUPATBA?feature=shared Feel part of a team: https://youtu.be/QbjtgaCyhes?feature=shared How the process works... Set up and create your profile in Workday to track the status - link to apply (Job ID: JR101556 or JR101577). We'll contact you via video call or phone call, whichever you prefer, with the further schedule status. During the interview phase, you will meet with the technical panels for 60 minutes. We will contact you after the interview to let you know if we'd like to progress your application. There will be two rounds of technical discussion followed by a managerial discussion. We will let you know if you're the successful candidate. Good luck!
Posted 5 days ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Should be a team player with a keen eye for detail and problem-solving skills. The primary responsibility is to build programs as per the goal, including implicit needs, with the required efficiency. Responsibilities Looking for an individual contributor with hands-on development experience. Writing and implementing programs that work as per specifications. Maintaining a good balance between efficiency and readability. Unit testing and evaluating new programs. Identifying areas for modification in existing programs and subsequently developing the modifications/changes. Developing quality assurance procedures. Troubleshooting, debugging, and upgrading existing software. Qualification Bachelor's degree in computer science or higher. Required Skills 2+ years of working experience in the software industry. Good hands-on experience with Java, Spring Boot, security, and UI. Knowledge of coding languages (e.g., Java, JavaScript, Golang) and version control systems (e.g., Git/SVN). Experience with relational databases and ORM frameworks like Hibernate, and with Kafka and Redis. Experience working with container technology. The ability to learn new technologies quickly. An analytical mind: should enjoy problem solving. Good communication skills, both written and verbal. Attention to detail and a desire to probe further into data.
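The posting's emphasis on unit testing and on balancing efficiency with readability can be sketched briefly. The `PriceCalculator` class and its discount rule are invented for illustration; the point is keeping business logic in small pure methods, separate from I/O, so a unit test can exercise them directly.

```java
// Illustrative only: a small pure method kept free of I/O so it is trivial to unit test.
class PriceCalculator {
    // Applies a percentage discount, clamping the percentage to the 0-100 range
    // so malformed inputs cannot produce negative or inflated prices.
    static double discounted(double price, double percent) {
        double p = Math.max(0, Math.min(100, percent));
        return price * (1 - p / 100.0);
    }
}
```

A unit test for such a method needs no mocks or fixtures, which is what keeps the readability/efficiency balance cheap to maintain as the code evolves.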
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description YOUR IMPACT Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment? OUR IMPACT We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. We have access to the latest technology and to massive amounts of structured and unstructured data, and we leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications. Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity. How You Will Fulfill Your Potential As a member of our team, you will: partner globally with sponsors, users, and engineering colleagues across multiple divisions to create end-to-end solutions; learn from experts; leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes; be able to innovate and incubate new ideas; have an opportunity to work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models; be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products. Qualifications A successful candidate will possess the following attributes: A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study. Expertise in Java, as well as proficiency with databases and data manipulation.
Experience in end-to-end solutions, automated testing, and SDLC concepts. The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Experience in some of the following is desired and can set you apart from other candidates: developing in large-scale systems, such as MapReduce on Hadoop/HBase; data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter; API design, such as to create interconnected services; knowledge of the financial industry and compliance or risk functions; the ability to influence stakeholders. About Goldman Sachs Goldman Sachs is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world.
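The MapReduce-style processing the posting mentions can be sketched at toy scale with Java streams. This is an illustration of the map/shuffle/reduce shape only, not production code for any of the systems named above, and `WordCountSketch` is a hypothetical class name.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// In-memory sketch of the map -> shuffle -> reduce pattern; frameworks like
// Hadoop or Spark apply the same shape across a cluster instead of one JVM.
class WordCountSketch {
    static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())          // map: emit each word as a key
                .collect(Collectors.groupingBy(     // shuffle: group records by key
                        w -> w,
                        Collectors.counting()));    // reduce: aggregate per key
    }
}
```

The same three stages appear whether the "records" are words in a sentence or, as in surveillance pipelines, trading events grouped by account.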
Posted 5 days ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role Job Description About When BlackRock started in 1988, its founders envisioned a company that combined the best of financial services with cutting-edge technology. They imagined a business that would provide financial services to clients as well as technology services to other financial firms. The result of their vision is Aladdin, our industry-leading, end-to-end investment management platform. With assets valued over USD $10 trillion managed on Aladdin, our technology empowers millions of investors to save for retirement, pay for college, buy a home and improve their financial wellbeing. Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and solving some of the world's most exciting challenges? Do you want to work with, and learn from, hands-on leaders in technology and finance? At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems. We recognize that strength comes from diversity, and will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. We invest and protect over $9 trillion (USD) of assets and have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Being a technologist at BlackRock means you get the best of both worlds: working for one of the most sophisticated financial companies and being part of a software development team responsible for next generation technology and solutions. What are Aladdin and Aladdin Engineering? You will be working on BlackRock's investment operating system called Aladdin. Aladdin is used both internally within BlackRock and externally by many financial institutions.
Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform to power informed decision-making and create a connective tissue for thousands of users investing worldwide. Our development teams reside inside the Aladdin Engineering group. We collaboratively build the next generation of technology that changes the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments. We perform risk calculations and process millions of transactions for thousands of users every day worldwide! Being a Member Of Aladdin Engineering, You Will Be Tenacious: Work in a fast-paced and highly complex environment. Creative thinker: Analyse multiple solutions and deploy technologies in a flexible way. Great teammate: Think and work collaboratively and communicate effectively. Fast learner: Pick up new concepts and apply them quickly. Responsibilities Include Collaborate with team members in a multi-office, multi-country environment. Deliver high efficiency, high availability, concurrent and fault tolerant software systems. Significantly contribute to development of Aladdin's global, multi-asset trading platform. Work with product management and business users to define the roadmap for the product. Design and develop innovative solutions to complex problems, identifying issues and roadblocks. Apply validated quality software engineering practices through all phases of development. Ensure resilience and stability through quality code reviews, unit, regression and user acceptance testing, dev ops and level two production support. Be a leader with vision and a partner in brainstorming solutions for team productivity, efficiency, guiding and motivating others.
Drive a strong culture by bringing principles of inclusion and diversity to the team and setting the tone through specific recruiting, management actions and employee engagement. Qualifications B.E./B.Tech./MCA or any other relevant engineering degree from a reputed university. Skills And Experience 4+ years of experience. A proven foundation in core Java and related technologies, with OO skills and design patterns. Track record building high quality software with design-focused and test-driven approaches. Hands-on experience in Java/Spring Framework/Spring Boot/Hibernate. In-depth understanding of concurrent programming and experience in designing high throughput, high availability, fault tolerant distributed applications. Prior experience with message brokers. Understanding of relational databases is a must. Demonstrable experience building modern software using engineering tools such as git, maven, unit testing and integration testing tools, mocking frameworks. Strong analytical and software architecture design skills with an emphasis on test driven development. Great analytical, problem-solving and communication skills. Some experience or a real interest in finance, investment processes, and/or an ability to translate business problems into technical solutions. Nice To Have And Opportunities To Learn Expertise in building distributed applications using SQL and/or NoSQL technologies like MS SQL, Sybase, Cassandra or Redis. A real-world practitioner of applying cloud-native design patterns to event-driven microservice architectures. Exposure to high scale distributed technology like Kafka, Mongo, Ignite, Redis. Exposure to building microservices and APIs ideally with REST, Kafka or gRPC. Experience working in an agile development team or on open-source development projects. Experience with optimization, algorithms or related quantitative processes.
Experience with Cloud platforms like Microsoft Azure, AWS, Google Cloud. Experience with cloud deployment technology (Docker, Ansible, Terraform, etc.) is also a plus. Experience with DevOps and tools like Azure DevOps. Experience with AI-related projects/products or experience working in an AI research environment. Knowledge of modern front-end frameworks such as React, Vue.js or Angular is a plus. Exposure to Docker, Kubernetes, and cloud services is beneficial. A degree, certifications or open-source track record that shows you have a mastery of software engineering principles. Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses.
Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment, the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
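The concurrent, fault-tolerant systems work described in this posting rests on building blocks such as bounded queues, which give producers back-pressure instead of letting them overwhelm consumers. Below is a minimal sketch assuming nothing about Aladdin's internals; the `ThroughputSketch` class is hypothetical and exists only to show the producer/consumer handoff.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative producer/consumer: a bounded BlockingQueue makes the producer
// block when the consumer falls behind, a basic form of back-pressure.
class ThroughputSketch {
    static long process(int items) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16); // small bound on purpose
        long[] sum = {0};
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) sum[0] += queue.take(); // blocks when empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        consumer.start();
        try {
            for (int i = 1; i <= items; i++) queue.put(i); // blocks when full
            consumer.join(); // join establishes happens-before, so reading sum is safe
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum[0]; // sum of 1..items
    }
}
```

In a real trading platform the queue would be a durable broker and the consumer a fleet of workers, but the blocking semantics shown here are the same idea in miniature.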
Posted 5 days ago