5.0 - 10.0 years
20 - 35 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Design, implement, and maintain AWS infrastructure ensuring scalability and high availability, utilizing infrastructure as code (IaC).
- Manage and optimize Windows Server environments, focusing on security and reliability.
- Collaborate with development and operations teams to automate and streamline processes.
- Monitor system performance and resolve issues to prevent outages.
- Participate in an on-call rotation to address urgent issues and maintain system integrity.
- Develop and maintain documentation for system configuration and procedures.
- Develop and implement automation scripts and tools to streamline deployment activities.
Required Qualifications:
- Minimum of five years of experience in Cloud/SRE/DevOps or a related field.
- Proven experience with AWS services including EC2, VPC, S3, RDS, and others.
- Strong proficiency in managing Windows Server and Linux environments.
- Experience with AWS IAM and security protocols.
- Familiarity with tools like Terraform, PowerShell, and Docker for automation.
- Proficiency in writing comprehensive technical documentation.
Nice to Have:
- Expertise with Microsoft Entra ID (Azure AD) and AWS IAM.
- Knowledge of Windows Server Remote Desktop Services on AWS.
- Experience using SAML for authentication in Windows domains.
- Familiarity with RDS databases (Oracle and MS SQL), especially conversion to AWS RDS.
- Experience in Identity and Access Management (IAM) across organizations and applications.
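The monitoring and automation duties above typically come down to small scripts. A minimal Python/boto3 sketch of an instance health check — the SNS topic ARN is an illustrative placeholder, and default AWS credentials/region are assumed:

```python
"""Poll EC2 status checks and alert on failures (hedged sketch)."""
import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"  # hypothetical

def check_instance_health() -> None:
    # Only running instances report current status checks here.
    statuses = ec2.describe_instance_status(IncludeAllInstances=False)
    for s in statuses["InstanceStatuses"]:
        system_ok = s["SystemStatus"]["Status"] == "ok"
        instance_ok = s["InstanceStatus"]["Status"] == "ok"
        if not (system_ok and instance_ok):
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject=f"Status check failed: {s['InstanceId']}",
                Message=str(s),
            )

if __name__ == "__main__":
    check_instance_health()
```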
Posted 2 weeks ago
4.0 - 9.0 years
0 - 1 Lacs
Hyderabad, Bengaluru
Hybrid
AWS Developers: Java (preferred) or Python, plus AWS CDK (preferred). This is a development role; AWS infrastructure experience is not required. If candidates with AWS CDK experience are unavailable, strong S3 or Lambda experience is acceptable. Minimum 1-2 years in AWS and 4+ years of overall experience.
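For context on the CDK-in-Python emphasis, here is a minimal AWS CDK (v2) sketch: an S3 bucket plus a Lambda that fires on object uploads. Construct names and the runtime version are illustrative; assumes `aws-cdk-lib` is installed and the AWS environment is bootstrapped.

```python
from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda, aws_s3_notifications as s3n
from constructs import Construct

class IngestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        bucket = s3.Bucket(self, "LandingBucket")
        handler = _lambda.Function(
            self, "OnUploadFn",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=_lambda.Code.from_inline(
                "def handler(event, context):\n"
                "    print(event)  # inspect the S3 notification\n"
            ),
        )
        # Invoke the function whenever a new object lands in the bucket.
        bucket.add_event_notification(
            s3.EventType.OBJECT_CREATED, s3n.LambdaDestination(handler)
        )

app = App()
IngestStack(app, "IngestStack")
app.synth()
```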
Posted 3 weeks ago
4.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Machine Learning & Data Pipelines:
- Strong understanding of Machine Learning principles, lifecycle, and deployment practices.
- Experience in designing and building ML pipelines.
- Knowledge of deploying ML models on AWS Lambda, EKS, or other relevant services.
- Working knowledge of Apache Airflow for orchestration of data workflows (see the sketch below).
- Proficiency in Python for scripting, automation, and ML model development with Data Scientists.
- Basic understanding of SQL for querying and data analysis.
Cloud and DevOps Experience:
- Hands-on experience with AWS services, including but not limited to AWS Glue, Lambda, S3, SQS, and SNS.
- Proficient in checking and interpreting CloudWatch logs and setting up alarms.
- Infrastructure as Code (IaC) experience using Terraform.
- Experience with CI/CD pipelines, particularly using GitLab for code and infrastructure deployments.
- Understanding of cloud cost optimization and budgeting, with the ability to assess cost implications of various AWS services.
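An illustrative Airflow DAG for the orchestration work described above — task names and logic are placeholders, and a recent Airflow 2.x (with the `schedule` argument) is assumed:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def build_features(**context):
    # Placeholder: pull raw data (e.g., from S3/Glue) and write features.
    print("building features for", context["ds"])

def score_model(**context):
    # Placeholder: load the trained model and score the new feature batch.
    print("scoring batch for", context["ds"])

with DAG(
    dag_id="ml_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    features = PythonOperator(task_id="build_features", python_callable=build_features)
    scoring = PythonOperator(task_id="score_model", python_callable=score_model)
    features >> scoring  # scoring runs only after feature build succeeds
```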
Posted 3 weeks ago
8.0 - 12.0 years
25 - 40 Lacs
Chennai
Work from Office
We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases like Snowflake, Redshift, or Databricks.
Key Responsibilities:
- Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage.
- Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration (see the sketch below).
- Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles.
- Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking.
- Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies.
- Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS.
- Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions.
- Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning.
Required Qualifications & Skills:
- Strong expertise in AWS Cloud services: compute (EC2), storage (S3), and security (IAM).
- Proficiency in programming languages: Python, PySpark, and AWS Lambda.
- Mandatory experience in ETL tools: AWS Glue and DMS for data migration and transformation.
- Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus.
- Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, EDW principles.
- Experience in designing and implementing large-scale, high-performance data solutions.
- Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions.
- Excellent communication and collaboration skills, with experience working in agile environments.
Preferred Qualifications:
- AWS certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent).
- Experience with real-time data streaming (Kafka, Kinesis, or similar).
- Familiarity with Infrastructure as Code (Terraform, CloudFormation).
- Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.).
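A hedged skeleton of the kind of Glue PySpark job this posting describes — catalog database, table, and S3 paths are illustrative placeholders, and the standard Glue job runtime is assumed:

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw (bronze) table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"   # hypothetical names
)

# Rename/cast columns on the way to the curated (silver) layer.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_date", "string", "order_date", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write partitioned Parquet to S3 for downstream MPP/warehouse loads.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/",
                        "partitionKeys": ["order_date"]},
    format="parquet",
)
job.commit()
```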
Posted 3 weeks ago
10.0 - 12.0 years
30 - 37 Lacs
Bengaluru
Work from Office
We need immediate joiners, or candidates serving their notice period who can join within 10-15 days. Candidates on bench or with an official 2-3 month notice period will not be considered.
- Strong working experience in design and development of RESTful APIs using Java, Spring Boot, and Spring Cloud.
- Technical hands-on experience supporting development, automated testing, infrastructure, and operations.
- Fluency with relational databases or, alternatively, NoSQL databases.
- Excellent pull request review skills and attention to detail.
- Experience with streaming platforms handling real-time data at massive scale, such as Confluent Kafka.
- Working experience with AWS services like EC2, ECS, RDS, S3, etc.
- Understanding of DevOps as well as experience with CI/CD pipelines.
- Industry experience in the Retail domain is a plus.
- Exposure to Agile methodology and project tools: Jira, Confluence, SharePoint.
- Working knowledge of Docker containers/Kubernetes.
- Excellent team player, with the ability to work independently and as part of a team.
- Experience in mentoring junior developers and providing technical leadership.
- Familiarity with monitoring & reporting tools (Prometheus, Grafana, PagerDuty, etc.).
- Ability to learn, understand, and work quickly with new and emerging technologies, methodologies, and solutions in the Cloud/IT technology space.
- Knowledge of a front-end framework such as React or Angular, and other programming languages like JavaScript/TypeScript or Python, is a plus.
Posted 3 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 11
The Team: We are looking for a highly motivated, enthusiastic, and skilled engineering lead for Commodity Insights. We strive to deliver solutions that are sector-specific, data-rich, and hyper-targeted for evolving business needs. Our software development leaders are involved in the full product life cycle, from design through release. You would be joining a strong, innovative team working on the content management platforms which support a large revenue stream for S&P Commodity Insights. Working very closely with the Product Owner and Development Manager, teams are responsible for the development of user enhancements and maintaining good technical hygiene.
The successful candidate will assist in the design, development, release, and support of content platforms. Skills required include ReactJS, Spring Boot, RESTful microservices, AWS services (S3, ECS, Fargate, Lambda, etc.), CSS, HTML, AJAX, JSON, XML, and SQL (PostgreSQL/Oracle). The candidate should be aware of GenAI and LLM models such as OpenAI and Claude, should be enthusiastic about building GenAI and business-related prompts, and should be able to develop and optimize prompts for AI models to improve accuracy and relevance (see the sketch below). The candidate must be able to work well with a distributed team, demonstrate an ability to articulate technical solutions for business requirements, have experience with content management/packaging solutions, and embrace a collaborative approach for the implementation of solutions.
Responsibilities:
- Lead and mentor a team through all phases of the software development lifecycle, adhering to agile methodologies (analyze, design, develop, test, debug, and deploy). Ensure high-quality deliverables and foster a collaborative environment.
- Be proficient with developer tools supporting the CI/CD process, including configuring and executing automated pipelines to build and deploy software components.
- Actively contribute to team planning and ceremonies, and commit to team agreements and goals.
- Ensure code quality and security by understanding vulnerability patterns, running code scans, and remediating issues.
- Mentor junior developers; make sure that code review tasks on all user stories are added and completed on time.
- Perform reviews and integration testing to assure the quality of project development efforts.
- Design database schemas, conceptual data models, UI workflows, and application architectures that fit into the enterprise architecture.
- Support the user base, assisting with tracking down issues and analyzing feedback to identify product improvements.
- Understand and commit to the culture of S&P Global: the vision, purpose, and values of the organization.
Basic Qualifications:
- 10+ years' experience in an agile team development role, delivering software solutions using Scrum.
- Java, J2EE, JavaScript, CSS/HTML, AJAX.
- ReactJS, Spring Boot, microservices, RESTful services, OAuth.
- XML, JSON, data transformation.
- SQL and NoSQL databases (Oracle, PostgreSQL).
- Working knowledge of Amazon Web Services (Lambda, Fargate, ECS, S3, etc.).
- Experience with GenAI or LLM models such as OpenAI and Claude is preferred.
- Experience with agile workflow tools (e.g., VSTS, JIRA).
- Experience with source code management tools (e.g., git), build management tools (e.g., Maven), and continuous integration/delivery processes and tools (e.g., Jenkins, Ansible).
- Self-starter, able to work to achieve objectives with minimum direction.
- Comfortable working independently as well as in a team.
- Excellent verbal and written communication skills.
Preferred Qualifications:
- Analysis of business information patterns, data analysis, and data modeling.
- Working with user experience designers to deliver end-user focused benefits realization.
- Familiarity with containerization (Docker, Kubernetes).
- Messaging/queuing solutions (Kafka, etc.).
- Familiarity with application security development/operations best practices (including static/dynamic code analysis tools).
About S&P Global Commodity Insights: At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping.
S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit .
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
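A hedged sketch of the prompt-development work mentioned above, using the OpenAI Python SDK (v1.x); the model name, system prompt, and function are illustrative assumptions, not part of the posting:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a commodities content assistant. Answer only from the "
    "provided article text and cite the section you used."
)

def summarize_article(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Summarize for a market brief:\n{article_text}"},
        ],
        temperature=0.2,  # low temperature for factual, repeatable output
    )
    return response.choices[0].message.content
```

Iterating on the system prompt against a fixed evaluation set of articles is one common way to "optimize prompts for accuracy and relevance" as the posting asks.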
S&P Global is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law.
Posted 3 weeks ago
4.0 - 7.0 years
5 - 16 Lacs
Hyderabad, Bengaluru
Work from Office
Roles and Responsibilities:
- Design, develop, test, deploy, and maintain Snowflake data warehouses for clients.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions.
- Develop ETL processes using Python scripts to extract data from various sources and load it into Snowflake tables.
- Troubleshoot issues related to Snowflake performance tuning, query optimization, and data quality.
Job Requirements:
- 4-7 years of experience in developing large-scale data warehouses on AWS using Snowflake.
- Strong understanding of lambda expressions in the Snowflake SQL language.
- Experience with the Python programming language for ETL development.
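A minimal sketch of a Python-driven Snowflake ETL step, assuming the `snowflake-connector-python` package; account, stage, and table names are placeholders, and the higher-order `FILTER` call is presumably what the "lambda expressions in Snowflake SQL" requirement refers to:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # hypothetical
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Bulk-load staged CSV files into the target table.
    cur.execute(
        "COPY INTO staging_orders FROM @raw_stage/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # A Snowflake higher-order function with a lambda expression.
    cur.execute(
        "SELECT FILTER(line_items, li -> li:qty > 0) AS valid_items "
        "FROM staging_orders LIMIT 10"
    )
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```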
Posted 3 weeks ago
9.0 - 14.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Qualifications/Skill Sets:
- Experience: 8+ years of experience in software engineering, with at least 3+ years at the Staff Engineer or Technical Lead level.
- Architecture Expertise: Proven track record designing and building large-scale, multi-tenant SaaS applications on cloud platforms (e.g., AWS, Azure, GCP).
- Tech Stack: Expertise in modern backend languages (e.g., Java, Python, Go, Node.js), frontend frameworks (e.g., React, Angular), and database systems (e.g., PostgreSQL, MySQL, NoSQL).
- Cloud & Infrastructure: Strong knowledge of containerization (Docker, Kubernetes), serverless architectures, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform, CloudFormation). End-to-end development and deployment experience in cloud applications.
- Distributed Systems: Deep understanding of event-driven architecture, message queues (e.g., Kafka, RabbitMQ), and microservices (see the sketch below).
- Security: Strong focus on secure coding practices and familiarity with identity management (OAuth2, SAML) and data encryption.
- Communication: Excellent verbal and written communication skills, with the ability to present complex technical ideas to stakeholders.
- Problem Solving: Strong analytical mindset and a proactive approach to identifying and solving system bottlenecks.
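A toy producer illustrating the event-driven, multi-tenant pattern the posting names, assuming the `confluent-kafka` client; the broker address, topic, and event shape are placeholders:

```python
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # hypothetical broker

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface errors.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]")

event = {"tenant_id": "t-42", "type": "user.created", "payload": {"id": 7}}
producer.produce(
    "tenant-events",
    key=event["tenant_id"],             # key by tenant for per-tenant ordering
    value=json.dumps(event).encode(),
    callback=delivery_report,
)
producer.flush()
```

Keying messages by tenant keeps each tenant's events in order on a single partition — a common design choice in multi-tenant SaaS event streams.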
Posted 3 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
What you will do:
In this vital role you will be responsible for designing, building, and maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments. Please note, this is an onsite role based in Hyderabad.
Roles & Responsibilities:
AWS Infrastructure Design & Implementation
- Architect, implement, and manage highly available AWS cloud environments.
- Design VPCs, subnets, security groups, and IAM policies to enforce security best practices.
- Optimize AWS costs using reserved instances, savings plans, and auto-scaling.
Infrastructure as Code (IaC) & Automation
- Develop, maintain, and enhance Terraform & CloudFormation templates for cloud provisioning.
- Automate deployment, scaling, and monitoring using AWS-native tools & scripting.
- Implement and manage CI/CD pipelines for infrastructure and application deployments.
Cloud Security & Compliance
- Enforce best practices in IAM, encryption, and network security.
- Ensure compliance with SOC 2, ISO 27001, and NIST standards.
- Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.
Monitoring & Performance Optimization
- Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring (see the sketch below).
- Implement auto-scaling, load balancing, and caching strategies for performance optimization.
- Troubleshoot cloud infrastructure issues and conduct root cause analysis.
Collaboration & DevOps Practices
- Work closely with software engineers, SREs, and DevOps teams to support deployments.
- Maintain GitOps best practices for cloud infrastructure versioning.
- Support on-call rotation for high-priority cloud incidents.
What we expect of you:
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Master's degree and 4 to 6 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Bachelor's degree and 6 to 8 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Diploma and 10 to 12 years of experience in computer science, IT, or a related field with hands-on cloud experience.
Must-Have Skills:
- Deep hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.).
- Expertise in Terraform & CloudFormation for AWS infrastructure automation.
- Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.).
- Strong troubleshooting and debugging skills in cloud networking, storage, and security.
Preferred Qualifications (Good-to-Have Skills):
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Familiarity with HPC, DGX Cloud.
Professional Certifications (preferred):
- AWS Certified Solutions Architect Associate or Professional
- AWS Certified DevOps Engineer Professional
- Terraform Associate Certification
Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work effectively with global, virtual teams.
- Effective communication and collaboration with cross-functional teams.
- Ability to work in a fast-paced, cloud-first environment.
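A hedged boto3 sketch of the proactive monitoring mentioned above: creating a CPU alarm programmatically. The instance ID, SNS topic ARN, and thresholds are illustrative placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=3,        # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical
)
```

In practice a team like this would usually express the same alarm in Terraform or CloudFormation so it is versioned alongside the rest of the infrastructure.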
Posted 3 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Noida, Pune, Gurugram
Work from Office
We are hiring an AWS Developer with Banking or Financial domain experience.
AWS Developer
Location: Noida, Gurugram, and Pune
Shift timing: 1:00 PM-10:00 PM
Job Description - Must-Have Skills:
- Domain: Financial or Banking.
- Expertise in AWS CDK or Terraform, AWS services (Lambda, ECS, S3), and PostgreSQL database management.
- Strong understanding of serverless architecture and event-driven design (SNS, SQS); see the sketch below.
Nice to Have:
- Knowledge of multi-account AWS setups and security best practices (IAM, VPC, etc.).
- Experience in cost optimization strategies in AWS.
Interested candidates, please share your updated resume at anu.c@irissoftware.com with the details below:
- Current company
- Current CTC
- Expected CTC
- Relevant experience in AWS
- Any experience in CDK or Terraform
- Notice period (if serving, please share your last working day)
- Current location
- Which of the locations (Noida, Gurugram, Pune) you are open to
- Whether you are open to the 1:00 PM to 10:00 PM shift
Regards, Anu
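A minimal example of the serverless, event-driven pattern the posting names: an AWS Lambda handler consuming an SQS batch. The message fields and downstream step are illustrative assumptions.

```python
import json

def handler(event, context):
    """Process a batch of SQS records delivered to Lambda."""
    # SQS invokes Lambda with a batch of messages under "Records".
    for record in event["Records"]:
        payment = json.loads(record["body"])
        print(f"processing payment {payment.get('payment_id')}")  # hypothetical field
        # ... business logic, e.g., write to PostgreSQL via RDS Proxy ...
    # With ReportBatchItemFailures enabled, returning an empty list
    # acknowledges the whole batch; failed message IDs would go here.
    return {"batchItemFailures": []}
```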
Posted 3 weeks ago
8.0 - 13.0 years
20 - 35 Lacs
Hyderabad
Remote
Databricks Administrator Azure/AWS | Remote | 6+ Years
Job Description: We are seeking an experienced Databricks Administrator with 6+ years of expertise in managing and optimizing Databricks environments. The ideal candidate should have hands-on experience with Azure/AWS Databricks, cluster management, security configurations, and performance optimization. This role requires close collaboration with data engineering and analytics teams to ensure smooth operations and scalability.
Key Responsibilities:
- Deploy, configure, and manage Databricks workspaces, clusters, and jobs (see the sketch below).
- Monitor and optimize Databricks performance, auto-scaling, and cost management.
- Implement security best practices, including role-based access control (RBAC) and encryption.
- Manage Databricks integration with cloud storage (Azure Data Lake, S3, etc.) and other data services.
- Automate infrastructure provisioning and management using Terraform, ARM templates, or CloudFormation.
- Troubleshoot Databricks runtime issues, job failures, and performance bottlenecks.
- Support CI/CD pipelines for Databricks workloads and notebooks.
- Collaborate with data engineering teams to enhance ETL pipelines and data processing workflows.
- Ensure compliance with data governance policies and regulatory requirements.
- Maintain and upgrade Databricks versions and libraries as needed.
Required Skills & Qualifications:
- 6+ years of experience as a Databricks Administrator or in a similar role.
- Strong knowledge of Azure/AWS Databricks and cloud computing platforms.
- Hands-on experience with Databricks clusters, notebooks, libraries, and job scheduling.
- Expertise in Spark optimization, data caching, and performance tuning.
- Proficiency in Python, Scala, or SQL for data processing.
- Experience with Terraform, ARM templates, or CloudFormation for infrastructure automation.
- Familiarity with Git, DevOps, and CI/CD pipelines.
- Strong problem-solving skills and ability to troubleshoot Databricks-related issues.
- Excellent communication and stakeholder management skills.
Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Associate/Professional).
- Experience in Delta Lake, Unity Catalog, and MLflow.
- Knowledge of Kubernetes, Docker, and containerized workloads.
- Experience with big data ecosystems (Hadoop, Apache Airflow, Kafka, etc.).
Email: Hrushikesh.akkala@numerictech.com
Phone/WhatsApp: 9700111702
For immediate response and further opportunities, connect with me on LinkedIn: https://www.linkedin.com/in/hrushikesh-a-74a32126a/
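A sketch of routine administration with the official `databricks-sdk` for Python, assuming workspace authentication is configured via environment variables; the cost guardrail shown is an illustrative policy, not part of the posting:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Audit running clusters and flag any without auto-termination (a cost risk).
for cluster in w.clusters.list():
    if not cluster.autotermination_minutes:
        print(f"no auto-termination: {cluster.cluster_name} ({cluster.cluster_id})")

# Cross-check scheduled jobs against the team's expected workloads.
for job in w.jobs.list(limit=25):
    print(job.job_id, job.settings.name if job.settings else "")
```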
Posted 3 weeks ago
9.0 - 12.0 years
35 - 40 Lacs
Bengaluru
Work from Office
We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role.
Key Responsibilities:
- Design and implement end-to-end data platforms leveraging AWS services.
- Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness.
- Develop and optimize solutions using Redshift, including stored procedures, federated queries, and the Redshift Data API (see the sketch below).
- Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines.
- Write efficient Python code and data frame transformations, along with unit testing.
- Manage orchestration tools such as AWS Step Functions and Airflow.
- Perform Redshift performance tuning to ensure optimal query execution.
- Collaborate with stakeholders to understand requirements and communicate technical solutions clearly.
Required Skills & Qualifications:
- Minimum 9 years of IT experience with proven AWS expertise.
- Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda.
- Mandatory experience working with AWS Redshift, including stored procedures and performance tuning.
- Experience building end-to-end data platforms on AWS.
- Proficiency in Python, especially working with data frames and writing testable, production-grade code.
- Familiarity with orchestration tools like Airflow or AWS Step Functions.
- Excellent problem-solving skills and a collaborative mindset.
- Strong verbal and written communication and stakeholder management abilities.
Nice to Have:
- Experience with CI/CD for data pipelines.
- Knowledge of AWS Lake Formation and Data Governance practices.
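A hedged sketch of the Redshift Data API usage highlighted above: submit SQL asynchronously via boto3 and poll for completion. Cluster, database, secret, and procedure names are placeholders.

```python
import time
import boto3

client = boto3.client("redshift-data")

resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical
    Database="dev",
    SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:redshift-creds",  # hypothetical
    Sql="CALL refresh_sales_mart();",       # a stored-procedure call, as the posting mentions
)

statement_id = resp["Id"]
while True:
    desc = client.describe_statement(Id=statement_id)
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)  # the Data API is asynchronous, so we poll
print(desc["Status"], desc.get("Error", ""))
```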
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Remote
As a Lead Engineer, you will play a critical role in shaping the technical direction of our projects. You will be responsible for leading a team of developers undertaking Creditsafe's digital transformation to our cloud infrastructure on AWS. Your expertise in data engineering, Python, and AWS will be crucial in building and maintaining high-performance, scalable, and reliable systems.
Key Responsibilities:
- Technical Leadership: Lead and mentor a team of engineers, providing guidance and support to ensure high-quality code and efficient project delivery.
- Software Design and Development: Collaborate with cross-functional teams to design and develop data-centric applications, microservices, and APIs that meet project requirements.
- AWS Infrastructure: Design, configure, and manage cloud infrastructure on AWS, including services like EC2, S3, Lambda, and RDS.
- Performance Optimization: Identify and resolve performance bottlenecks; optimize code and AWS resources to ensure scalability and reliability.
- Code Review: Conduct code reviews to ensure code quality, consistency, and adherence to best practices.
- Security: Implement and maintain security best practices within the codebase and cloud infrastructure.
- Documentation: Create and maintain technical documentation to facilitate knowledge sharing and onboarding of team members.
- Collaboration: Collaborate with product managers, architects, and other stakeholders to deliver high-impact software solutions.
- Research and Innovation: Stay up to date with the latest Python, data engineering, and AWS technologies, and propose innovative solutions that can enhance our systems.
- Troubleshooting: Investigate and resolve technical issues and outages as they arise.
Qualifications:
- Bachelor's or higher degree in Computer Science, Software Engineering, or a related field.
- Proven experience as a Data Engineer with a strong focus on AWS services.
- Solid experience in leading technical teams and project management.
- Proficiency in Python, including deep knowledge of data engineering implementation patterns.
- Strong expertise in AWS services and infrastructure setup.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
- Excellent problem-solving skills and the ability to troubleshoot complex technical issues.
- Strong communication and teamwork skills.
- A passion for staying updated with the latest industry trends and technologies.
Posted 3 weeks ago
7.0 - 10.0 years
6 - 10 Lacs
Mumbai
Work from Office
Summary: Engineering is at the heart of everything we do at Tinvio, translating ideas into products that touch the lives of our customers. As a Senior Software Engineer, you will collaborate with a cross-functional team of talented designers, product managers, and engineers to solve complex problems in an open and fast-paced environment with very flat structures where everyone has a say. In this role, you will carve out an unmatched user experience for our customers by implementing robust frontend technology solutions around the order management, account management, credit, and payment domains that fuel our core business. In addition, you will have the opportunity to work on the complete stack using technologies like ReactJS, NodeJS, JavaScript libraries, design libraries, Babel, Webpack, Redux, RESTful APIs, CloudFront, S3, and Firebase in the cloud environment. Join us in building the next-gen B2B transactions platform for merchants and suppliers across the region.
Responsibilities:
- Design, build, and maintain a high-performance, high-availability, and fraud-tolerant technology platform for B2B transactions with minimal guidance
- Create pixel-perfect and trustworthy frontend experiences that will be functional, reliable, and delightfully easy to use for thousands of users across the region
- Implement API integrations that are stable, backward compatible, and reliable, and build maintainable UI components that are scalable and extensible
- Contribute cross-functionally in the software development cycle by evaluating UX, collaborating with back-end engineers, or participating in technical architecture decisions
- Effectively communicate your technical solutions and product ideas within the engineering teams and the broader product organization
- Articulate a long-term technical direction for maintaining and scaling our web applications, and propose technology choices with research and experimentation
Requirements:
- Bachelor's degree in computer science / information technology / a similar field
- Strong problem-solving skills and experience in application debugging
- Minimum 7+ years of professional experience in front-end engineering
- Must be able to develop high-quality deliverables and scalable applications within defined timelines
- Hands-on working experience using JavaScript, CSS & HTML
- Hands-on working experience with frontend tools including ReactJS, Babel, NPM, Webpack, Redux
- Familiarity with RESTful APIs
- Knowledge of modern user authorization mechanisms, such as JSON Web Token (JWT)
- Knowledge of performance testing frameworks including Mocha and Jest
- Experience in browser-based debugging and software performance testing
Benefits: Generous perks, an awesome open office (or remote) culture, fair compensation to help you work better, and equity for long-term financial gains.
Posted 3 weeks ago
5.0 - 8.0 years
8 - 13 Lacs
Pune
Work from Office
We are staffing small, self-contained development teams with people who love solving problems and building high-quality products and services. We use a wide range of technologies and are building up a next-generation microservices platform that can make our learning tools and content available to all our customers. If you want to make a difference in the lives of students and teachers and understand what it takes to deliver high-quality software, we would love to talk to you about this opportunity.
Technology Stack: You'll work with technologies such as Java, Spring Boot, Kafka, Aurora, Mesos, Jenkins, etc. This will be a hands-on coding role working as part of a cross-functional team alongside other developers, designers, and quality engineers, within an agile development environment. We're working on the development of our next-generation learning platform and solutions utilizing the latest in server and web technologies.
Responsibilities:
- Build high-quality, clean, scalable, and reusable code by enforcing best practices around software engineering architecture and processes (code reviews, unit testing, etc.) on the team.
- Work with the product owners to understand detailed requirements, and own your code from design, implementation, and test automation through delivery of a high-quality product to our users.
- Drive the design, prototyping, implementation, and scaling of cloud data platforms to tackle business needs.
- Identify ways to improve data reliability, efficiency, and quality.
- Plan and perform development tasks from design specifications, and provide accurate time estimates for them.
- Construct and verify (unit test) software components to meet design specifications.
- Perform quality assurance functions by collaborating with cross-team members to identify and resolve software defects.
- Provide mentoring on software design, construction, development methodologies, and best practices.
- Participate in production support and on-call rotation for the services owned by the team.
- Mentor less experienced engineers in understanding the big picture of company objectives, constraints, inter-team dependencies, etc.
- Participate in creating standards and ensure team members adhere to them, such as security patterns, logging patterns, etc.
- Collaborate with project architects and cross-functional team members/vendors in different geographical locations, and assist team members in proving the validity of new software technologies.
- Promote AGILE processes among development and the business, including facilitation of scrums.
- Have ownership over the things you build; help shape the product and technical vision, direction, and how we iterate.
- Work closely with your product and design teammates for improved stability, reliability, and quality.
- Run numerous experiments in a fast-paced, analytical culture so you can quickly learn and adapt your work.
- Promote a positive engineering culture through teamwork, engagement, and empowerment.
- Function as the tech lead for various features and initiatives on the team.
- Build and maintain CI/CD pipelines for services owned by the team, following secure development practices.
- Perform other duties as assigned to ensure the success of the team and the entire organization.
Skills & Experience:
- 5 to 8 years' experience in a relevant software development role.
- Excellent object-oriented design & programming skills, including the application of design patterns and avoidance of anti-patterns.
- Strong cloud platform skills: AWS Lambda, Terraform, SNS, SQS, RDS, Kinesis, DynamoDB, etc.
- Experience building large-scale, enterprise applications with ReactJS/AngularJS.
- Proficiency with front-end technologies such as HTML, CSS, and JavaScript preferred.
- Experience working in a collaborative team of application developers and with source code repositories.
- Deep knowledge of more than one programming language, like Node.js/Java.
- Demonstrable knowledge of AWS and data platform experience: Lambda, DynamoDB, RDS, S3, Kinesis, Snowflake.
- Demonstrated ability to follow through with all tasks, promises, and commitments.
- Ability to communicate and work effectively within priorities.
- Ability to advocate ideas and to objectively participate in design critiques.
- Ability to work under tight timelines in a fast-paced environment.
- Advanced understanding of software design concepts, development methodologies, and principles.
- Ability to solve large-scale complex problems, and to architect, design, implement, and maintain large-scale systems.
- Strong technical leadership and mentorship ability.
- Working experience of modern Agile software development methodologies (i.e., Kanban, Scrum, Test-Driven Development).
Posted 3 weeks ago
5.0 - 10.0 years
20 - 30 Lacs
Pune, Ahmedabad, Mumbai (All Areas)
Hybrid
Strong experience in Python and MongoDB, with hands-on experience developing code in both. Experience in the Identity Management, API Management, Security, Tokenization, and Microservices domains.
Required Candidate profile: Django, Shell Scripting, Python, Maven, ReactJS, Redux, Hooks, Storybook, jQuery, TypeScript, HTML5, CSS3. Docker, AWS (EC2, S3, RDB, LB), Python, AWS Lambda functions, serverless architecture.
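A brief pymongo sketch matching the Python + MongoDB + tokenization emphasis above; the connection URI, collection, and document fields are illustrative placeholders:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical URI
tokens = client["identity"]["tokens"]

# Store a tokenized credential with a unique lookup index on the token value.
tokens.create_index("token", unique=True)
tokens.insert_one({"token": "tok_abc123", "user_id": 42, "scope": ["read"]})

doc = tokens.find_one({"token": "tok_abc123"})
print(doc["user_id"] if doc else "not found")
```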
Posted 3 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
Manager Information Systems
What you will do:
In this vital role you will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, as well as automating operations, monitoring system health, and responding to incidents to minimize downtime.
Roles & Responsibilities:
- Lead the design, configuration, and deployment of validation systems (such as Veeva Validation Management, ALM, and KNEAT), ensuring scalability, maintainability, and performance optimization.
- Implement standard methodologies, conduct code reviews, and provide technical guidance and mentorship to junior developers.
- Take ownership of complex software projects from conception to deployment; manage software delivery scope, risk, and timeline.
- Collaborate closely with business stakeholders to discuss requirements, ensuring alignment between technical capabilities and business objectives, while effectively communicating any technical limitations or constraints.
- Contribute to both front-end and back-end development using cloud technology.
- Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations. Ensure all documents are compliant with 21 CFR Part 11, Annex 11, and other relevant regulations.
- Identify and resolve technical challenges effectively, and stay updated with the latest trends and advancements.
- Work with Product Owners, Service Owners, and/or delivery teams to ensure that delivery matches commitments, acting as an escalation point and facilitating communication when service commitments are not met.
- Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time.
- Develop talent, motivate the team, delegate effectively, champion diversity within the team, and act as a role model of servant leadership.
What we expect of you:
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Master's degree and 4 to 6 years of related field experience; OR Bachelor's degree and 6 to 8 years of related field experience; OR Diploma and 10 to 12 years of related field experience.
- Solid technical background, including understanding of software development processes, databases, and cloud-based systems.
- Experience configuring SaaS systems such as Veeva or ALM.
- Experience with document management systems, the ServiceNow suite, ALM, JIRA, the Veeva platform, and GenAI.
- Agile/Scrum experience, with demonstrated success managing product backlogs and delivering iterative product improvements.
Preferred Qualifications:
- Understanding of Veeva Quality Vault/ALM/KNEAT.
- Curiosity about the modern technology domain and learning agility.
- Experience with the following technologies will be a big plus: Veeva Vault, MuleSoft, AWS (Amazon Web Services) services (DynamoDB, EC2, S3, etc.), Application Programming Interface (API) integration, and Structured Query Language (SQL).
- Excellent communication skills, with the ability to convey complex technical concepts.
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
Posted 3 weeks ago
5.0 - 10.0 years
8 - 13 Lacs
Pune
Work from Office
We are seeking a highly skilled Senior Infrastructure Engineer with expertise in Windows and Linux operating systems, Azure and AWS cloud platforms, and enterprise-grade infrastructure solutions. The ideal candidate will have a proven track record of designing, implementing, and managing complex infrastructure in a dynamic environment.
Key Responsibilities:
1. Systems and Infrastructure Management: Design, deploy, and maintain Windows and Linux-based systems in both on-premises and cloud environments, ensuring the reliability, security, and performance of the systems. Administer core services such as Active Directory, DHCP, DNS, and Group Policy.
2. Cloud Infrastructure: Architect, implement, and manage Azure infrastructure components, including virtual machines, virtual networks, storage accounts, Azure AD (Entra ID), and enterprise app registrations. Manage AWS cloud resources, including EC2 instances, S3 buckets, RDS databases, and IAM roles.
3. Virtualization and Storage Solutions: Utilize VMware and Windows hypervisor technologies to design, deploy, and manage virtualized environments for performance and scalability. Design and maintain enterprise backup and storage solutions, including Rubrik and PureStorage.
4. Automation and Orchestration: Automate infrastructure provisioning, configuration, and deployment using tools like Terraform, Ansible, PowerShell, and Azure Automation.
5. Monitoring and Performance Optimization: Monitor system performance using tools such as DataDog and LogicMonitor. Implement performance tuning measures to enhance infrastructure reliability and efficiency.
6. Security and Compliance: Implement security best practices, access controls, and compliance measures in line with SOC 2 and SOX standards.
7. Collaboration and Innovation: Work with cross-functional teams to design scalable, secure, and highly available infrastructure solutions. Share expertise and mentor team members through documentation and training sessions.
8. Disaster Recovery: Develop and maintain backup strategies and disaster recovery plans for critical systems and data.
9. Continuous Learning: Stay current with emerging technologies and industry best practices related to infrastructure engineering, cloud computing, and virtualization.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in infrastructure engineering with a focus on Windows and Linux operating systems and cloud platforms (Azure and AWS).
- Strong proficiency in Windows server administration, Linux system administration, and performance tuning.
- Hands-on experience with Azure services such as VMs, Azure Networking, Azure Storage, and enterprise app registrations.
- Familiarity with AWS services and tools, including EC2, S3, RDS, VPC, and CloudFormation.
- Expertise in VMware vSphere, ESXi, and vCenter.
- Experience with scripting and automation tools like PowerShell, Python, and Bash.
- Excellent communication and collaboration skills.
- Strong problem-solving abilities and adaptability in a fast-paced environment.
Certifications (Preferred):
- Microsoft Certified: Azure Administrator Associate (AZ-104) or equivalent.
- AWS Certified Solutions Architect - Associate or equivalent.
- VMware Certified Professional (VCP).
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune & Bangalore).
Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days in office per week)
Job Description:
- 5-14 years of experience in Big Data & related technologies.
- Expert-level understanding of distributed computing principles.
- Expert-level knowledge of and experience in Apache Spark (see the sketch below).
- Hands-on programming with Python.
- Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop.
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming.
- Good understanding of Big Data querying tools, such as Hive and Impala.
- Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files.
- Good understanding of SQL queries, joins, stored procedures, and relational schemas.
- Experience with NoSQL databases, such as HBase, Cassandra, and MongoDB.
- Knowledge of ETL techniques and frameworks.
- Performance tuning of Spark jobs.
- Experience with native AWS cloud data services.
- Ability to lead a team efficiently.
- Experience designing and implementing Big Data solutions.
- Practitioner of AGILE methodology.
WE OFFER:
- Opportunity to work on technical challenges that may impact across geographies.
- Vast opportunities for self-development: online university, global knowledge-sharing opportunities, and learning through external certifications.
- Opportunity to share your ideas on international platforms.
- Sponsored Tech Talks & Hackathons.
- Possibility to relocate to any EPAM office for short- and long-term projects.
- Focused individual development.
- Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits.
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.).
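A small PySpark batch job of the kind this role involves — read, aggregate, write — with an explicit shuffle-partition setting as a nod to the performance-tuning requirement. Paths and column names are illustrative placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("orders-daily-agg")
    .config("spark.sql.shuffle.partitions", "200")  # tune to data volume
    .getOrCreate()
)

orders = spark.read.parquet("s3://raw-bucket/orders/")  # hypothetical path

daily = (
    orders.filter(F.col("status") == "COMPLETE")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Partitioned output keeps downstream queries pruned to the dates they need.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders_daily/"
)
spark.stop()
```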
Posted 3 weeks ago
0.0 - 5.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Associate IS Bus Sys Analyst
What you will do:
In this vital role, you will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, as well as automating operations, monitoring system health, and responding to incidents to minimize downtime.
Roles & Responsibilities:
- Maintenance: Maintain existing code and configuration; support and maintain SaaS applications.
- Development & Deployment: Develop, test, and deploy code based on designs created with the guidance of senior team members. Implement solutions following standard methodologies for code structure and efficiency.
- Documentation: Generate clear and concise code documentation for new and existing features to ensure smooth handovers and easy future reference.
- Collaborative Design: Work closely with team members and collaborators to understand project requirements and translate them into functional technical designs.
- Code Reviews & Quality Assurance: Participate in peer code reviews, providing feedback on adherence to standard methodologies and ensuring high code quality and maintainability.
- Testing & Debugging: Assist in writing unit and integration tests to validate new features and functionalities. Support fix and debugging efforts for existing systems to resolve bugs and performance issues.
- Application Support: Perform application support and administration tasks such as periodic reviews, incident response and resolution, and security reviews.
- Continuous Learning: Stay up to date with the newest technologies and standard methodologies, with a focus on expanding knowledge in cloud services, automation, and secure software development.
What we expect of you
Must-Have Skills:
- Solid technical background, including understanding of software development processes, databases, and cloud-based systems.
- Experience configuring SaaS applications (such as Veeva).
- Experience working with databases (SQL/NoSQL).
- Strong foundational knowledge of testing methodologies.
Good-to-Have Skills:
- Understanding of Veeva Quality Vault/ALM/KNEAT.
- Curiosity about the modern technology domain and learning agility.
- Experience with the following technologies will be a big plus: Veeva Vault, MuleSoft, AWS (Amazon Web Services) services (DynamoDB, EC2, S3, etc.), Application Programming Interface (API) integration, and Structured Query Language (SQL).
- Superb communication skills, with the ability to convey complex technical concepts.
Qualifications:
- Bachelor's degree and 0 to 3 years of experience in software development processes, databases, and cloud-based systems; OR Diploma and 4 to 7 years of experience in software development processes, databases, and cloud-based systems.
Posted 3 weeks ago
1.0 - 6.0 years
3 - 7 Lacs
Hyderabad
Work from Office
What you will do:
As a Business Intelligence Engineer, you will solve unique and complex problems at a rapid pace, utilizing the latest technologies to create solutions that are highly scalable. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable solutions and responding to requests for rapid releases of analytical outcomes.
- Design, develop, and maintain interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos, others).
- Analyse datasets to identify trends, patterns, and insights that inform business strategy and decision-making.
- Partner with leaders and stakeholders across Finance, Sales, Customer Success, Marketing, Product, and other departments to understand their data and reporting requirements.
- Stay abreast of the latest trends and technologies in business intelligence and data analytics, inclusive of AI use in BI.
- Elicit and document clear and comprehensive business requirements for BI solutions, translating business needs into technical specifications and solutions.
- Collaborate with Data Engineers to ensure efficient upstream transformations and create data models/views that will hydrate accurate and reliable BI reporting.
- Contribute to data quality and governance efforts to ensure the accuracy and consistency of BI data.
What we expect of you
Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience.
Functional Skills:
- 1+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling (see the sketch below).
Preferred Qualifications:
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift.
- Experience in data mining, ETL, etc., using databases in a business environment with large-scale, complex datasets.
- AWS Developer certification (preferred).
Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
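An illustrative BI-style pull combining the SQL and Python scripting skills above: query a warehouse with SQLAlchemy + pandas, then pivot for a dashboard. The connection URL, schema, and table are placeholders.

```python
import pandas as pd
from sqlalchemy import create_engine

# Redshift speaks the PostgreSQL wire protocol, so the pg driver works here.
engine = create_engine("postgresql+psycopg2://user:***@redshift-host:5439/dev")  # hypothetical

query = """
    SELECT region,
           date_trunc('month', order_date) AS month,
           SUM(amount) AS revenue
    FROM sales.orders
    GROUP BY 1, 2
"""
df = pd.read_sql(query, engine)

# Pivot into the wide layout most visualization tools expect.
trend = df.pivot_table(index="month", columns="region", values="revenue", aggfunc="sum")
print(trend.tail())
```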
Posted 3 weeks ago
5.0 - 8.0 years
10 - 15 Lacs
Kochi
Remote
We are looking for a skilled AWS Cloud Engineer with a minimum of 5 years of hands-on experience in managing and implementing cloud-based solutions on AWS. The ideal candidate will have expertise in AWS core services such as S3, EC2, MSK, Glue, DMS, and SageMaker, along with strong programming and containerization skills using Python and Docker.
- Design, implement, and manage scalable AWS cloud infrastructure solutions.
- Hands-on experience with AWS services: S3, EC2, MSK, Glue, DMS, and SageMaker.
- Develop, deploy, and maintain Python-based applications in cloud environments.
- Containerize applications using Docker and manage deployment pipelines.
- Troubleshoot infrastructure and application issues, review designs, and code solutions.
- Ensure high availability, performance, and security of cloud resources.
- Collaborate with cross-functional teams to deliver reliable and scalable solutions.
Posted 3 weeks ago
9.0 - 14.0 years
20 - 30 Lacs
Bengaluru
Hybrid
My profile: linkedin.com/in/yashsharma1608
Hiring manager profile: on payroll of https://www.nyxtech.in/
Client: Brillio (payroll)
AWS Architect
Primary skills: AWS (Redshift, Glue, Lambda, ETL, and Aurora), advanced SQL and Python, PySpark. Note: Aurora database experience is a mandatory skill.
Experience: 9+ years
Notice period: immediate joiners
Location: Any Brillio location (Bangalore preferred)
Budget: 30 LPA
Job Description:
- 9+ years of IT experience with deep expertise in S3, Redshift, Aurora, Glue, and Lambda services.
- At least one instance of proven experience in developing a data platform end to end using AWS.
- Hands-on programming experience with data frames and Python, including unit testing the Python as well as the Glue code.
- Experience with orchestration mechanisms like Airflow, Step Functions, etc.
- Experience working on AWS Redshift is mandatory: must have experience writing stored procedures, understanding of the Redshift Data API, and writing federated queries.
- Experience in Redshift performance tuning.
- Good communication and problem-solving skills; very good stakeholder communication and management.
Posted 3 weeks ago
12.0 - 17.0 years
14 - 19 Lacs
Hyderabad
Work from Office
W e are seeking a highly skilled , hands -on and technically proficient Test Automation Engineering Manager with strong experience in data quality , data integration , and a specific focus on semantic layer validation . This role combines technical ownership of automated data testing solutions with team leadership responsibilities, ensuring that the data infrastructure across platforms remains accurate , reliable, and high performing . As a leader in the QA and Data Engineering space, you will be responsible for building robust automated testing frameworks, validating GraphQL -based data layers, and driving the teams technical growth. Your work will ensure that all data flows, transformations, and API interactions meet enterprise-grade quality standards across the data lifecycle. Y ou will be responsible for the end-to-end design and development of test automation frameworks, working collaboratively with your team. As the delivery owner for test automation, your primary responsibilities will include building and automating comprehensive validation frameworks for semantic layer testing, GraphQL API validation, and schema compliance , ensuring alignment with data quality, performance, and integration reliability standards. You will also work closely with data engineers, product teams, and platform architects to validate data contracts and integration logic, supporting the integrity and trustworthiness of enterprise data solutions. This is a highly technical and hands-on role, with strong emphasis on automation, data workflow validation , and the seamless integration of testing practices into CI/CD pipelines . Roles & Responsibilities: Design and implement robust data validation frameworks focused on the semantic layer, ensuring accurate data model, schema compliance, and contract adherence across services and platforms. Build and automate end-to-end data pipeline validations across ingestion, transformation, and consumption layers using Databricks, Apache Spark, and AWS services such as S3, Glue, Athena, and Lake Formation. Lead test automation initiatives by developing scalable, modular test frameworks and embedding them into CI/CD pipelines for continuous validation of semantic models, API integrations, and data workflows. Validate GraphQL APIs by testing query/mutation structures, schema compliance, and end-to-end integration accuracy using tools like Postman, Python, and custom test suites. Oversee UI and visualization testing for tools like Tableau, Power BI, and custom front-end dashboards, ensuring consistency with backend data through Selenium with Python and backend validations. Define and drive the overall QA strategy with emphasis on performance, reliability, and semantic data accuracy, while setting up alerting and reporting mechanisms for test failures, schema issues, and data contract violations. Collaborate closely with product managers, data engineers, developers, and DevOps teams to align quality assurance initiatives with business goals and agile release cycles. Actively contribute to architecture and design discussions, ensuring quality and testability are embedded from the earliest stages of development. Mentor and manage QA engineers, fostering a collaborative environment focused on technical excellence, knowledge sharing, and continuous professional growth. Must-Have Skills: Team Leadership Experience is also required. Strong 6+ years of experience in Requested Data Ops/Testing is required 7+ to 12 years of Overall experience is expected in Test Automation. 
Must-Have Skills:
7 to 12 years of overall experience in test automation, including 6+ years of strong experience in DataOps/data testing.
Team leadership experience is required.
Strong experience in designing and implementing test automation frameworks integrated with CI/CD pipelines.
Expertise in validating data pipelines at the syntactic layer, including schema checks, null/duplicate handling, and transformation validation.
Hands-on experience with Databricks, Apache Spark, and AWS services (S3, Glue, Athena, Lake Formation).
Proficiency in Python, PySpark, and SQL for writing validation scripts and automation logic.
Solid understanding of GraphQL APIs, including schema validation and query/mutation testing.
Experience with API testing tools like Postman and Python-based test frameworks.
Proficient in UI and visualization testing using Selenium with Python, especially for tools like Tableau, Power BI, or custom dashboards.
Familiarity with CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI for test orchestration.
Ability to implement alerting and reporting for test failures, anomalies, and validation issues.
Strong background in defining QA strategies and leading test automation initiatives in data-centric environments.
Excellent collaboration and communication skills, with the ability to work closely with cross-functional teams in Agile settings.
Experience mentoring and managing QA engineers, fostering a collaborative environment focused on technical excellence, knowledge sharing, and continuous professional growth.
Good-to-Have Skills:
Experience with data governance tools such as Apache Atlas, Collibra, or Alation.
Understanding of DataOps methodologies and practices.
Contributions to internal quality dashboards or data observability systems.
Awareness of metadata-driven testing approaches and lineage-based validations.
Experience working with agile testing methodologies such as Scaled Agile.
Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest.
Education and Professional Certifications:
Bachelor's/Master's degree in Computer Science and Engineering preferred.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
EQUAL OPPORTUNITY STATEMENT
We provide reasonable accommodations for individuals with disabilities during the application, interview process, job functions, and employment benefits. Contact us to request an accommodation.
Posted 3 weeks ago
6.0 - 9.0 years
8 - 11 Lacs
Hyderabad
Work from Office
Role Description: We are seeking a highly skilled, hands-on Senior QA & Test Automation Specialist (Test Automation Engineer) with strong experience in data validation, ETL testing, test automation, and QA process ownership. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines.
Roles & Responsibilities:
Collaborate with the QA Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation (a minimal example follows this list).
Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification.
Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines.
Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection.
Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations.
Write and manage test plans, test cases, and test data for structured, semi-structured, and unstructured data.
Track, manage, and report defects using tools like JIRA, ensuring thorough root cause analysis and timely resolution.
Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and enable shift-left testing practices.
Ensure comprehensive test coverage for all aspects of the data lifecycle, including ingestion, transformation, delivery, and consumption.
Participate in QA ceremonies (standups, planning, retrospectives) and continuously contribute to improving the QA process and culture.
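For illustration, here is a minimal Python sketch of the kind of GraphQL contract validation described above, written as a pytest-style test. The endpoint, query, and field names are hypothetical placeholders, not a real API.

# Illustrative GraphQL contract check; endpoint, query, and fields are hypothetical.
import requests

GRAPHQL_URL = "https://api.example.com/graphql"  # hypothetical endpoint

QUERY = """
query OrderById($id: ID!) {
  order(id: $id) {
    orderId
    customerId
    orderTotal
  }
}
"""

def test_order_query_contract():
    resp = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"id": "123"}},
        timeout=10,
    )
    # Transport-level check: GraphQL servers typically return 200 even for query errors.
    assert resp.status_code == 200, f"Unexpected HTTP status {resp.status_code}"

    body = resp.json()
    # Contract checks: no GraphQL errors, and the response honors the agreed shape.
    assert "errors" not in body, f"GraphQL errors: {body.get('errors')}"
    order = body["data"]["order"]
    expected_fields = {"orderId", "customerId", "orderTotal"}
    missing = expected_fields - order.keys()
    assert not missing, f"Response violates data contract; missing fields: {missing}"

    # Simple value-level business rule (hypothetical): totals are never negative.
    assert order["orderTotal"] is None or order["orderTotal"] >= 0

A fuller suite would also exercise mutations, error paths, and pagination, and reconcile API responses against the underlying warehouse tables.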
Must-Have Skills:
6-9 years of experience in QA roles, with at least 3+ years of strong exposure to data pipeline testing and ETL validation.
Strong in SQL, Python, and optionally PySpark; comfortable writing complex queries and validation scripts.
Practical experience with manual validation of data pipelines and source-to-target testing.
Experience in validating GraphQL APIs, semantic layers (Looker, dbt, etc.), and schema/data contract compliance.
Familiarity with data integration tools and platforms such as Databricks, AWS Glue, Redshift, Athena, or BigQuery.
Strong understanding of test planning, defect tracking, bug lifecycle management, and QA documentation.
Experience working in Agile/Scrum environments with standard QA processes.
Knowledge of test case and defect management tools (e.g., JIRA, TestRail, Zephyr).
Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
Deep hands-on expertise in SQL, Python, and PySpark for testing and automating validation.
Proven experience in manual and automated testing of batch and real-time data pipelines.
Familiarity with data processing and analytics stacks: Databricks, Spark, AWS (Glue, S3, Athena, Redshift).
Ability to troubleshoot data issues independently and collaborate with engineering for root cause analysis.
Experience integrating automated tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions).
Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro.
Strong ability to validate and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation.
Hands-on experience with API testing using Postman, pytest, or custom automation scripts.
Good-to-Have Skills:
Experience with data governance tools such as Apache Atlas, Collibra, or Alation.
Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.
Experience building or maintaining test data generators.
Contributions to internal quality dashboards or data observability systems.
Awareness of metadata-driven testing approaches and lineage-based validations.
Experience working with agile testing methodologies such as Scaled Agile.
Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest.
Education and Professional Certifications:
Bachelor's/Master's degree in Computer Science and Engineering preferred.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
Posted 3 weeks ago