5.0 - 10.0 years
10 - 20 Lacs
Bangalore Rural, Bengaluru
Hybrid
Responsibilities: 3+ years of hands-on experience leading and performing development in one or more programming languages such as Python and PySpark. 4+ years of hands-on experience developing and deploying cloud-native solutions on AWS services: compute (EC2, Lambda), storage (S3), databases (RDS, Aurora, PostgreSQL, DynamoDB), orchestration (Apache Airflow, Step Functions, SNS), ETL/analytics (Glue, EMR, Athena, Redshift), infrastructure (CloudFormation, CodePipeline), data migration (AWS DataSync, AWS DMS), API Gateway, IAM, etc. Expertise in handling large data sets and data models, including design, data model creation, and development of data pipelines for ingestion, migration, and transformation. Strong SQL Server and stored procedure skills. Knowledge of APIs, SSO, and streaming technology is nice to have. Mandatory Skill Sets: AWS, PySpark, Spark, Glue, Lambda. Years of Experience Required: 5+ years. Education Qualification: B.Tech / M.Tech / MBA / MCA
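As a rough illustration of the PySpark-on-AWS pipeline work this posting describes, here is a minimal ingest-and-transform sketch; the S3 paths and column names are hypothetical, not taken from the posting.

```python
# Minimal sketch of an ingestion-and-transform step; bucket names and
# columns are placeholders purely for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw CSV landed in S3 and keep only valid rows
raw = (spark.read.option("header", True)
       .csv("s3://example-raw-bucket/orders/"))
clean = (raw.filter(F.col("order_id").isNotNull())
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("order_date", F.to_date("order_ts")))

# Write partitioned Parquet for downstream Athena/Redshift consumers
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```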
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Lead Data Engineer at Mastercard, you will be a key player in the Mastercard Services Technology team, responsible for unlocking the potential of data assets by innovating, managing big data assets, ensuring data accessibility, and enforcing standards and principles in the Big Data space. Your role will involve designing and building scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices. You will mentor and guide other engineers, foster a culture of curiosity and continuous improvement, and create robust ETL/ELT pipelines that integrate with various systems. Your responsibilities will include decomposing complex problems into scalable components aligned with platform goals, championing best practices in data engineering, collaborating across teams, supporting data governance and quality efforts, and optimizing cloud infrastructure components related to data engineering workflows. You will actively participate in architectural discussions, iteration planning, and feature sizing meetings while adhering to Agile processes. To excel in this role, you should have at least 5 years of hands-on experience in data engineering with strong PySpark and Python skills. You must possess solid experience in designing and implementing data models, pipelines, and batch/stream processing systems, along with a strong foundation in data modeling, database design, and performance optimization. Experience with cloud platforms such as AWS, Azure, or GCP and knowledge of modern data architectures and data lifecycle management are essential, as is familiarity with CI/CD practices, version control, and automated testing. You should be able to mentor junior engineers effectively, possess excellent communication and collaboration skills, and hold a Bachelor's degree in Computer Science, Engineering, or a related field. Comfort with Agile/Scrum development environments, curiosity, adaptability, problem-solving skills, and a drive for continuous improvement are key traits for success. Experience with integrating heterogeneous systems, building resilient data pipelines across cloud environments, orchestration tools, data governance practices, containerization, infrastructure automation, and exposure to machine learning data pipelines or MLOps will be advantageous. A Master's degree, relevant certifications, or contributions to open-source/data engineering communities are a bonus.
Posted 1 month ago
8.0 - 12.0 years
10 - 15 Lacs
Pune
Work from Office
Key Responsibilities: Design and develop scalable applications using Python and AWS services. Debug and resolve production issues across complex distributed systems. Architect solutions aligned with business strategies and industry standards. Lead and mentor a team of India-based developers and guide their career development. Ensure technical deliverables meet the highest standards of quality and performance. Research and integrate emerging technologies and processes into the development strategy. Document solutions in compliance with SDLC standards using defined templates. Assemble large, complex datasets based on functional and non-functional requirements. Handle operational issues and recommend improvements to the technology stack. Facilitate end-to-end platform integration across enterprise-level applications. Required Skills (technical skills, cloud and architecture, tools and processes): Python; AWS (EC2, EKS, Glue, Lambda, S3, EMR, RDS, API Gateway); Terraform and CI/CD pipelines; data engineering; Step Functions, CloudFront, EventBridge, AppFlow, Airflow (MWAA), QuickSight; debugging and troubleshooting; system integration; SDLC and documentation templates. Qualifications: 10+ years of software development experience, preferably in financial/trading applications. 5+ years of people management and mentoring experience. Proven track record in technical leadership and architecture planning. Expertise in developing applications using Python and the AWS stack. Strong grasp of Terraform and automated CI/CD processes. Exceptional multitasking and prioritization capabilities.
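For context on the Step Functions orchestration listed above, the following is a minimal sketch of starting a state machine execution from Python with boto3; the ARN, execution name, and input payload are made up for illustration.

```python
# Illustrative sketch only: kick off a Step Functions execution from Python.
import json
import boto3

sfn = boto3.client("stepfunctions")

response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:example-pipeline",
    name="run-2024-01-01",  # execution names must be unique per state machine
    input=json.dumps({"source": "s3://example-bucket/input/"}),
)
print(response["executionArn"])
```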
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
As a Senior Data Scientist with 5+ years of experience, you will be responsible for designing and implementing models, mining data for insights, and interpreting complex data structures to drive business decision-making. Your expertise in machine learning, including areas such as NLP, machine vision, and time series, will be essential in this role. You are expected to have strong skills in model tuning, model validation, and supervised and unsupervised learning, plus hands-on experience with model development, data preparation, training, and inference-ready deployment of models. Your proficiency in descriptive and inferential statistics, hypothesis testing, and data analysis will help in developing code for reproducible analysis of data. Experience with AWS services such as SageMaker, Lambda, Glue, Step Functions, and EC2 is necessary, along with knowledge of Databricks, the Anaconda distribution, and similar data science development and deployment environments. Familiarity with ML algorithms for time series, natural language processing, optimization, object detection, topic modeling, clustering, and regression analysis will be highly valued. You should have expertise in Hive/Impala, Spark, Python, pandas, Keras, scikit-learn, statsmodels, TensorFlow, and PyTorch. At least 1 year of end-to-end model deployment and production experience is required, along with a good understanding of model deployment on the Azure ML platform, Anaconda Enterprise, or AWS SageMaker. Basic knowledge of deep learning algorithms such as Mask R-CNN and YOLO, and familiarity with visualization and analytics/reporting tools like Power BI, Tableau, and Alteryx, will be considered advantageous for this role.
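To illustrate the inference-ready deployment side of such a role, here is a minimal sketch of invoking an already-deployed SageMaker endpoint from Python; the endpoint name and payload shape are assumptions, not part of the posting.

```python
# Minimal sketch of calling a deployed SageMaker endpoint for inference.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"features": [5.1, 3.5, 1.4, 0.2]}  # hypothetical feature vector
response = runtime.invoke_endpoint(
    EndpointName="example-forecast-endpoint",
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```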
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
You should possess a Bachelor's/Master's degree in Computer Science, Computer Engineering, or a related field, along with 2-6 years of experience in server-side development using languages such as GoLang, Node.js, or Python. Proficiency in AWS services such as Lambda, DynamoDB, Step Functions, and S3 is essential, as is hands-on experience deploying and managing serverless environments. Experience with Docker, containerization, and Kubernetes is also required for this role. In addition, knowledge of database technologies such as MongoDB and DynamoDB, along with experience in CI/CD pipelines and automation, would be beneficial. Experience with video transcoding/streaming in the cloud is a plus. Lastly, strong problem-solving skills are a must for this position.
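As a small illustration of the serverless stack listed above, the sketch below shows a Python Lambda handler persisting an item to DynamoDB; the table name and event shape are hypothetical.

```python
# Sketch of a serverless handler writing to DynamoDB; names are placeholders.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-sessions")

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    session_id = body.get("session_id")
    if not session_id:
        return {"statusCode": 400, "body": json.dumps({"error": "session_id is required"})}
    table.put_item(Item={"session_id": session_id, "user": body.get("user", "anonymous")})
    return {"statusCode": 201, "body": json.dumps({"stored": session_id})}
```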
Posted 1 month ago
6.0 - 10.0 years
10 - 20 Lacs
Pune
Hybrid
Must Have: Strong programming skills with over 6 years of experience. Proficient Java engineer with solid knowledge of Java, multithreading, and object-oriented programming (OOP) concepts. Tech-savvy and eager to learn new skills; actively tracks industry trends. Hands-on experience with AWS services including EC2, ECS, Lambda, API Gateway, SQS, DynamoDB, SNS, S3, Redis, and CloudWatch. Good understanding of event-driven architecture with practical experience using Kafka or similar tools. Strong grasp of web application fundamentals and experience in developing web services (REST or SOAP) within microservices architecture and domain-driven design. Familiar with core Spring Framework concepts such as IoC, DI, Spring Boot, Spring Security, and other Spring modules. Experience with API gateways like Apigee or similar platforms. Proficient in using tools such as Git, Jenkins, SonarQube, SignalFx, and others. Experience with centralized logging tools like Splunk or similar. Familiar with microservices and distributed system architectures.
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Cloud Data Integration Consultant, you will lead a complex data integration project spanning API frameworks, a data lakehouse architecture, and middleware solutions. The project centers on AWS, Snowflake, Oracle ERP, and Salesforce, with a high-transaction-volume POS system. Your role involves building reusable, scalable API frameworks, optimizing middleware, and ensuring security and compliance in a multi-cloud environment. Expertise in API development and integration is crucial: you should have deep experience managing APIs across multiple systems, building reusable components, and ensuring bidirectional data flow for real-time synchronization, along with skills in middleware solutions and custom API adapters for integrating varied systems seamlessly. For cloud infrastructure and data processing, strong experience with AWS services such as S3, Lambda, Fargate, and Glue is required for data processing, storage, and integration, together with hands-on experience optimizing Snowflake for querying and reporting and knowledge of Terraform for automating the provisioning and management of AWS resources. Security and compliance are critical: you need a deep understanding of cloud security protocols, API security, and compliance enforcement, and should be able to set up audit logs, ensure traceability, and enforce compliance across cloud services. Handling high-volume transaction systems and real-time data processing requirements is part of the role, so you should be familiar with optimizing AWS Lambda and Fargate for efficient data processing and skilled in operational monitoring and error-handling mechanisms. Collaboration is essential: you will provide post-go-live support and work with internal teams and external stakeholders to ensure seamless integration between systems. To qualify, you should have at least 10 years of experience in enterprise API integration, cloud architecture, and data management; deep expertise in AWS services, Snowflake, Oracle ERP, and Salesforce integrations; and a proven track record of delivering scalable API frameworks and handling complex middleware systems. Strong problem-solving skills, familiarity with containerization technologies, and experience in the retail or e-commerce industries are also desirable. Key responsibilities include leading the design and implementation of reusable API frameworks, optimizing data flow through middleware systems, building robust security frameworks, collaborating with the in-house team on integration, and providing ongoing support, monitoring, and optimization post-go-live.
Posted 1 month ago
5.0 - 10.0 years
20 - 35 Lacs
Kochi, Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment. Key Responsibilities: • Design and implement machine learning models and pipelines using AWS SageMaker and related services. • Develop and maintain robust data pipelines for training and inference workflows. • Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions. • Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies. • Optimize model performance and ensure scalability and reliability in production environments. • Monitor deployed models for drift, performance degradation, and anomalies. • Document processes, architectures, and workflows for reproducibility and compliance. Required Skills & Qualifications: • Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). • Solid understanding of machine learning algorithms, model evaluation, and tuning. • Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch. • Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration. • Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes). • Familiarity with monitoring tools and logging frameworks for ML systems. • Excellent problem-solving and communication skills. Preferred Qualifications: • AWS certification (e.g., AWS Certified Machine Learning - Specialty). • Experience with real-time inference and streaming data. • Knowledge of data governance, security, and compliance in ML systems.
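To illustrate the model evaluation work such pipelines automate, below is a minimal, generic scikit-learn training-and-evaluation sketch; the dataset and metric are placeholders, not the team's actual pipeline.

```python
# Generic train-and-evaluate step of the kind an ML pipeline would automate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")
```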
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
Tezo is a new generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence. Job Overview: The AWS Architect with Data Engineering Skills will be responsible for designing, implementing, and managing scalable, robust, and secure cloud infrastructure and data solutions on AWS. This role requires a deep understanding of AWS services, data engineering best practices, and the ability to translate business requirements into effective technical solutions. Key Responsibilities: Architecture Design: Design and architect scalable, reliable, and secure AWS cloud infrastructure. Develop and maintain architecture diagrams, documentation, and standards. Data Engineering: Design and implement ETL pipelines using AWS services such as Glue, Lambda, and Step Functions. Build and manage data lakes and data warehouses using AWS services like S3, Redshift, and Athena. Ensure data quality, data governance, and data security across all data platforms. AWS Services Management: Utilize a wide range of AWS services (EC2, S3, RDS, Lambda, DynamoDB, etc.) to support various workloads and applications. Implement and manage CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy. Monitor and optimize the performance, cost, and security of AWS resources. Collaboration and Communication: Work closely with cross-functional teams including software developers, data scientists, and business stakeholders. Provide technical guidance and mentorship to team members on best practices in AWS and data engineering. Security and Compliance: Ensure that all cloud solutions follow security best practices and comply with industry standards and regulations. Implement and manage IAM policies, roles, and access controls. Innovation and Improvement: Stay up to date with the latest AWS services, features, and best practices. Continuously evaluate and improve existing systems, processes, and architectures.
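As an illustration of the Athena-based data lake querying mentioned above, here is a minimal boto3 sketch that submits a query and polls for completion; the database, table, and output location are hypothetical.

```python
# Sketch of running an Athena query from Python and polling for its result.
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "example_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows[:3])
```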
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Senior Principal Consultant - Data Engineer. In this role, we are looking for candidates with relevant years of experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and experience implementing appropriate ML algorithms. Responsibilities: Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products. Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers. Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services. Build and implement machine learning models and prototype solutions for proof-of-concept. Scale existing ML models into production on a variety of cloud platforms. Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams. Design and develop data pipelines: create efficient data pipelines to collect, process, and store large volumes of data from various sources. Implement data solutions: develop and implement scalable data solutions using technologies like Hadoop, Spark, and SQL databases. Ensure data quality: monitor and improve data quality by implementing validation processes and error handling. Collaborate with teams: work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions. Optimize performance: continuously optimize data systems for performance, scalability, and cost-effectiveness. Experience in a GenAI project. Qualifications we seek in you!
Minimum Qualifications / Skills: Bachelor's degree in Computer Science Engineering or Information Technology, or a BSc in Computer Science, Mathematics, or a similar field; a Master's degree is a plus. Integration - APIs, microservices, and ETL/ELT patterns. DevOps (good to have) - Ansible, Jenkins, ELK. Containerization - Docker, Kubernetes, etc. Orchestration - Airflow, Step Functions, Control-M, etc. Languages and scripting - Python, Scala, Java, etc. Cloud services - AWS, GCP, Azure, and cloud native. Analytics and ML tooling - SageMaker, ML Studio. Execution paradigm - low latency/streaming, batch. Preferred Qualifications / Skills: Data platforms - Big Data (Hadoop, Spark, Hive, Kafka, etc.) and data warehouses (Teradata, Redshift, BigQuery, Snowflake, etc.). Visualization tools - Power BI, Tableau. Why join Genpact: Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Principal Consultant, AI Engineer! In this role, we are looking for candidates with relevant years of experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and experience implementing appropriate AI algorithms with GenAI. Responsibilities: Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products. Close the gap between AI research and production to create ground-breaking new products and features and solve problems for our customers with GenAI. Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services. Build and implement machine learning models and prototype solutions for proof-of-concept. Scale existing AI models into production on a variety of cloud platforms with GenAI. Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams. Qualifications we seek in you! Minimum Qualifications / Skills: Bachelor's degree in Computer Science Engineering or Information Technology, or a BSc in Computer Science, Mathematics, or a similar field; a Master's degree is a plus. Integration - APIs, microservices, and ETL/ELT patterns. DevOps (good to have) - Ansible, Jenkins, ELK. Containerization - Docker, Kubernetes, etc. Orchestration - Airflow, Step Functions, Control-M, etc. Languages and scripting - Python, Scala, Java, etc. Cloud services - AWS, GCP, Azure, and cloud native. Analytics and AI tooling - SageMaker, GenAI. Execution paradigm - low latency/streaming, batch. Ensure GenAI outputs are contextually relevant; familiarity with Generative AI technologies; design and implement GenAI solutions; collaborate with service line teams to design, implement, and manage GenAI solutions. Preferred Qualifications / Skills: Data platforms - Big Data (Hadoop, Spark, Hive, Kafka, etc.) and data warehouses (Teradata, Redshift, BigQuery, Snowflake, etc.). AI and GenAI tools. Certifications in AI/ML or GenAI. Familiarity with generative models, prompt engineering, and fine-tuning techniques to develop innovative AI solutions. Designing, developing, and implementing solutions tailored to meet client needs.
Understanding business requirements and translating them into technical solutions using GenAI. Why join Genpact: Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: AWS Developer. About the Company/Team: Oracle FSGIU's Finergy division is a specialized team dedicated to the Banking, Financial Services, and Insurance (BFSI) industry, offering deep domain expertise and innovative solutions. With a focus on accelerated implementation, Finergy helps financial institutions rapidly deploy multi-channel platforms, ensuring an exceptional customer experience. Our team provides end-to-end banking solutions, leveraging integrated analytics and dashboards for improved efficiency. Finergy's consulting services offer strategic guidance, aligning technology investments with business objectives. Job Summary: We are on the lookout for a skilled AWS Developer with 4-6 years of experience to design and build cutting-edge applications on the Amazon Web Services (AWS) platform. The ideal candidate will have hands-on expertise in developing serverless and containerized applications, integrating various AWS services, and ensuring the performance, security, and scalability of cloud-native solutions. Key Responsibilities: Design and develop scalable applications using AWS Lambda, API Gateway, and other AWS services, focusing on serverless architecture. Build and manage RESTful APIs, integrating with Amazon DynamoDB, RDS, and S3 for data storage and management. Implement Infrastructure as Code (IaC) using CloudFormation or Terraform to provision and manage AWS resources. Set up and maintain CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy for efficient software delivery. Automate workflows and background processes using Step Functions, SQS, and SNS for enhanced application functionality. Utilize CloudWatch, X-Ray, and CloudTrail for logging, monitoring, and troubleshooting, ensuring application health. Implement security measures using IAM roles, KMS, and Secrets Manager to protect sensitive data. Collaborate closely with DevOps, testers, and product owners in an Agile environment to deliver high-quality solutions. Qualifications & Skills - Mandatory: 4-6 years of software development experience, including at least 2 years in AWS development. Proficiency in Node.js, Python, or Java for backend development. In-depth knowledge of AWS services: Lambda, API Gateway, S3, DynamoDB, RDS, IAM, SNS/SQS. Hands-on experience with CI/CD pipelines and version control systems like Git, GitHub, or Bitbucket. Understanding of containerization with Docker, and familiarity with Amazon ECS or EKS. Scripting skills using Bash, Python, or the AWS CLI for automation. Awareness of cloud security best practices, cost optimization techniques, and performance tuning. Good-to-Have: AWS certification: AWS Certified Developer - Associate or AWS Certified Solutions Architect - Associate. Experience with microservices, serverless computing, and event-driven architecture. Exposure to multi-cloud or hybrid cloud environments. Strong communication and collaboration skills, with a problem-solving mindset. Self-Assessment Questions: Describe a serverless application you developed on AWS. What services did you use, and how did you ensure scalability and security? Explain your experience with CI/CD pipelines on AWS. How have you utilized CodePipeline, CodeBuild, and CodeDeploy to automate the deployment process? Share your approach to monitoring and troubleshooting AWS-based applications. What tools do you use, and how do you identify and resolve issues? Discuss a scenario where you implemented security measures using AWS IAM and other security services.
Career Level - IC2
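To make the serverless pattern in this posting concrete, the following is a minimal sketch of a Python Lambda handler behind an API Gateway proxy integration reading from DynamoDB; the resource names are illustrative only.

```python
# Sketch of an API Gateway (proxy integration) Lambda reading from DynamoDB.
import json
import boto3

table = boto3.resource("dynamodb").Table("example-accounts")

def lambda_handler(event, context):
    account_id = (event.get("pathParameters") or {}).get("id")
    if not account_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    item = table.get_item(Key={"account_id": account_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```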
Posted 1 month ago
4.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Job Title: AWS Engineer Experience: 4 - 8 Years Location: Bengaluru (Hybrid 2- 3 Days Onsite per Week) Employment Type: Full-Time Notice Period: Only Immediate to 15 Days Joiners Preferred Job Description: We are looking for an experienced AWS Engineer to join our dynamic data engineering team. The ideal candidate will have hands-on experience building and maintaining robust, scalable data pipelines and cloud-based architectures on AWS. Key Responsibilities: Design, develop, and maintain scalable data pipelines using AWS services such as Glue, Lambda, S3, Redshift, and EMR Collaborate with data scientists and ML engineers to operationalize machine learning models using AWS SageMaker Implement efficient data transformation and feature engineering workflows Optimize ETL/ELT processes and enforce best practices for data quality and governance Work with structured and unstructured data using Amazon Athena, DynamoDB, RDS, and similar services Build and manage CI/CD pipelines for data and ML workflows using AWS CodePipeline, CodeBuild, and Step Functions Monitor data infrastructure for performance, reliability, and cost-effectiveness Ensure data security and compliance with organizational and regulatory standards Required Skills: Strong experience with AWS data and ML services Solid knowledge of ETL/ELT frameworks and data modeling Proficiency in Python, SQL, and scripting for data engineering Experience with CI/CD and DevOps practices on AWS Good understanding of data governance and compliance standards Excellent collaboration and problem-solving skills
Posted 1 month ago
8.0 - 13.0 years
14 - 24 Lacs
Hyderabad
Work from Office
Roles and Responsibilities Lead the backend development for our AI-based product, driving architectural decisions and hands on implementation. Design and develop scalable, secure, and maintainable APIs using AWS Lambda and API Gateway . Build and maintain CI/CD pipelines using AWS-native tools (CodePipeline, CodeBuild) and GitHub. Collaborate with frontend developers (React/MUI) to ensure seamless integration between frontend and backend systems. Work closely with AWS and infrastructure teams to implement best practices in performance, security, and cost optimization. Review code, provide technical guidance to junior developers, and drive high engineering standards. Participate in sprint planning, estimations, and cross-functional discussions. An Ideal Candidate would have Strong programming skills in Python , with experience building production-grade applications. Proven experience with AWS Lambda , API Gateway , and other serverless components. Deep understanding of RESTful API design and development. Hands-on experience in setting up CI/CD pipelines using AWS services and GitHub. Familiarity with event-driven architectures , cloud deployments , and security best practices. Experience in working with Agile/Scrum methodologies. Strong communication and leadership skills to coordinate across cross-functional teams. Good-to-Have: Exposure to AI/ML pipelines, vector databases, or model-serving workflows. Experience with AWS Step Functions , DynamoDB , S3 , CloudWatch , and CloudFormation . Knowledge of observability tools (e.g., X-Ray, Prometheus, Grafana). Familiarity with frontend architecture and integration patterns. Experience: 8+ years with at least 2 years in a lead capacity Location: Hyderabad, India Role: Full Time Salary: Competitive
Posted 1 month ago
6.0 - 11.0 years
27 - 35 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Hybrid
Job Title: Senior Python Backend Developer AWS Serverless & Event-Driven Architecture Job Description: We are seeking an experienced Python Backend Developer with expertise in asynchronous programming and AWS serverless architecture to design and develop scalable, event-driven microservices. Key Responsibilities: Develop APIs using FastAPI, Flask, or Django (async views) Design and implement event-driven microservices using AWS Lambda, API Gateway, DynamoDB (GSI/LSI), EventBridge, Step Functions, SNS, and SQS Apply API standards with Pydantic, OAuth2/JWT, and rate limiting Build resilient, idempotent services with observability using AWS X-Ray, CloudWatch, DLQs, and retries Optimize DynamoDB schemas, TTLs, and streams Requirements: 4+ years of backend development experience with Python Strong expertise in AWS Serverless stack
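As a small sketch of the API standards described above (Pydantic validation on an async FastAPI route), the example below uses illustrative models and an in-memory store rather than any real service.

```python
# Minimal async FastAPI sketch with Pydantic validation; purely illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()

class OrderIn(BaseModel):
    order_id: str = Field(min_length=1)
    amount: float = Field(gt=0)

ORDERS: dict[str, OrderIn] = {}  # stand-in for DynamoDB in this sketch

@app.post("/orders", status_code=201)
async def create_order(order: OrderIn) -> dict:
    if order.order_id in ORDERS:
        raise HTTPException(status_code=409, detail="duplicate order")
    ORDERS[order.order_id] = order
    return {"accepted": order.order_id}
```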
Posted 2 months ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Senior Principal Consultant, AI/ML Engineer! In this role, we are looking for candidates with relevant years of experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and experience implementing appropriate ML algorithms. Responsibilities: Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products. Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers. Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services. Build and implement machine learning models and prototype solutions for proof-of-concept. Scale existing ML models into production on a variety of cloud platforms. Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams. Qualifications we seek in you! Minimum Qualifications / Skills: Bachelor's degree in Computer Science Engineering or Information Technology, or a BSc in Computer Science, Mathematics, or a similar field; a Master's degree is a plus. Integration - APIs, microservices, and ETL/ELT patterns. DevOps (good to have) - Ansible, Jenkins, ELK. Containerization - Docker, Kubernetes, etc. Orchestration - Airflow, Step Functions, Control-M, etc. Languages and scripting - Python, Scala, Java, etc. Cloud services - AWS, GCP, Azure, and cloud native. Analytics and ML tooling - SageMaker, ML Studio. Execution paradigm - low latency/streaming, batch. Preferred Qualifications / Skills: Data platforms - Big Data (Hadoop, Spark, Hive, Kafka, etc.) and data warehouses (Teradata, Redshift, BigQuery, Snowflake, etc.). Visualization tools - Power BI, Tableau. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 months ago
5.0 - 10.0 years
20 - 30 Lacs
Pune, Chennai, Bengaluru
Work from Office
Mandatory key skills: Athena, Step Functions, Spark - PySpark, ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront. We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation. Key Responsibilities: Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions. Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control. Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration. Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning. Build data lakes and data warehouses using S3, Aurora, and Athena. Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM. Develop and maintain metadata, lineage, and data cataloging capabilities. Participate in data modeling exercises for both OLTP and OLAP environments. Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights. Monitor, debug, and optimize data pipelines for reliability and performance. Required Skills & Experience: Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront. Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL. Solid understanding of ETL/ELT processes and data warehousing concepts. Familiarity with modern data platform fundamentals and distributed data processing. Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases. Experience with orchestration and workflow management tools within AWS. Strong debugging and performance tuning skills across the data stack.
Posted 2 months ago
5.0 - 10.0 years
22 - 37 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Experience: 5-8 years (Lead, up to 23 LPA), 8-10 years (Senior Lead, up to 35 LPA), 10+ years (Architect, up to 42 LPA). Location: Bangalore as first preference; Hyderabad, Chennai, Pune, and Gurgaon are also options. Notice: immediate to a maximum of 15 days. Mode of Work: Hybrid. Job Description: Athena, Step Functions, Spark - PySpark, ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront. We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation. Key Responsibilities: Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions. Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control. Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration. Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning. Build data lakes and data warehouses using S3, Aurora, and Athena. Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM. Develop and maintain metadata, lineage, and data cataloging capabilities. Participate in data modeling exercises for both OLTP and OLAP environments. Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights. Monitor, debug, and optimize data pipelines for reliability and performance. Required Skills & Experience: Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront. Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL. Solid understanding of ETL/ELT processes and data warehousing concepts. Familiarity with modern data platform fundamentals and distributed data processing. Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases. Experience with orchestration and workflow management tools within AWS. Strong debugging and performance tuning skills across the data stack.
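To ground the Glue-based ETL orchestration mentioned in these requirements, here is a minimal boto3 sketch that triggers a Glue job run and polls its state; the job name and arguments are hypothetical.

```python
# Sketch of starting an AWS Glue job from Python and waiting on its state.
import time
import boto3

glue = boto3.client("glue")

run_id = glue.start_job_run(
    JobName="example-orders-etl",
    Arguments={"--target_date": "2024-01-01"},
)["JobRunId"]

while True:
    state = glue.get_job_run(JobName="example-orders-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)
print(f"Glue run {run_id} finished with state {state}")
```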
Posted 2 months ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
The Core AI BI & Data Platforms Team has been established to create, operate and run the Enterprise AI, BI and Data that facilitate the time to market for reporting, analytics and data science teams to run experiments, train models and generate insights, as well as evolve and run the CoCounsel application and its shared capability, the CoCounsel AI Assistant. The Enterprise Data Platform aims to provide self-service capabilities for fast and secure ingestion and consumption of data across TR. At Thomson Reuters, we are recruiting a team of motivated Cloud professionals to transform how we build, manage and leverage our data assets. The Data Platform team in Bangalore is seeking an experienced Software Engineer with a passion for engineering cloud-based data platform systems. Join our dynamic team as a Software Engineer and take a pivotal role in shaping the future of our Enterprise Data Platform. You will develop and implement data processing applications and frameworks on cloud-based infrastructure, ensuring the efficiency, scalability, and reliability of our systems. About the Role: In this opportunity as the Software Engineer, you will: Develop data processing applications and frameworks on cloud-based infrastructure in partnership with Data Analysts and Architects, with guidance from the Lead Software Engineer. Innovate with new approaches to meet data management requirements. Make recommendations about platform adoption, including technology integrations, application servers, libraries, and AWS frameworks, documentation, and usability by stakeholders. Contribute to improving the customer experience. Participate in code reviews to maintain a high-quality codebase. Collaborate with cross-functional teams to define, design, and ship new features. Work closely with product owners, designers, and other developers to understand requirements and deliver solutions. Effectively communicate and liaise across the data platform & management teams. Stay updated on emerging trends and technologies in cloud computing. About You: You're a fit for the role of Software Engineer if you meet all or most of these criteria: Bachelor's degree in Computer Science, Engineering, or a related field. 3+ years of relevant experience implementing data lake and data management technologies for large-scale organizations. Experience in building and maintaining data pipelines with excellent run-time characteristics such as low latency, fault tolerance, and high availability. Proficient in the Python programming language. Experience in AWS services and management, including serverless, container, queueing, and monitoring services like Lambda, ECS, API Gateway, RDS, DynamoDB, Glue, S3, IAM, Step Functions, CloudWatch, SQS, and SNS. Good knowledge of consuming and building APIs. Business intelligence tools like Power BI. Fluency in querying languages such as SQL. Solid understanding of software development practices such as version control via Git, CI/CD, and release management. Agile development cadence. Good critical thinking, communication, documentation, troubleshooting, and collaborative skills.
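As an illustration of the queueing services listed above, the following is a minimal long-polling SQS consumer in Python; the queue URL and message handling are assumptions made for the sketch.

```python
# Illustrative long-polling SQS consumer; queue URL is a placeholder.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-ingest-queue"

def drain_once() -> int:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    messages = resp.get("Messages", [])
    for msg in messages:
        payload = json.loads(msg["Body"])
        print("processing", payload)  # real code would hand off to the pipeline
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return len(messages)

if __name__ == "__main__":
    drain_once()
```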
Posted 2 months ago
6.0 - 8.0 years
20 - 22 Lacs
Alappuzha / Alleppey, Kerala, India
On-site
Job Title: Senior Python Developer (AWS & PostgreSQL). Experience: 6+ years (4+ years in Python + PostgreSQL + AWS Lambda). Work Location: Trivandrum/Kochi. Budget: Up to 22 LPA. Notice Period: Immediate joiners only. Mandatory Skills: Python (FastAPI/Flask/Django); PostgreSQL (PL/pgSQL, query optimization, AWS Aurora); AWS serverless stack (Lambda, API Gateway, Step Functions, S3, Cognito); CI/CD & IaC (Terraform, Git/Bitbucket); RESTful APIs (JSON, web API design); database architecture (indexing, performance tuning, data modeling). Job Purpose: As a Senior Python Developer, you will architect, develop, and deploy high-performance backend solutions using Python, PostgreSQL, and AWS serverless technologies. You will lead technical initiatives, mentor junior developers, and ensure compliance with enterprise architecture standards. Key Responsibilities: 1. Backend Development & Architecture: Design and build scalable Python applications (FastAPI/Flask/Django). Develop RESTful APIs integrated with AWS Lambda, API Gateway, and Step Functions. Optimize PostgreSQL/Aurora queries, stored procedures, and triggers (PL/pgSQL). 2. AWS Cloud Solutions: Implement serverless architectures using Lambda, S3, Cognito, and DynamoDB. Automate deployments with Terraform (IaC) and CI/CD pipelines. 3. Database & Performance Engineering: Fine-tune SQL queries, indexing strategies, and execution plans. Ensure high availability and disaster recovery for PostgreSQL/Aurora. 4. Leadership & Collaboration: Lead a team of developers, providing technical guidance and code reviews. Collaborate with cross-functional teams to refine functional and non-functional requirements. 5. Compliance & Best Practices: Adhere to security policies (data encryption, IAM, Cognito auth). Advocate for clean code, unit testing, and DevOps practices. Required Competencies: Expertise in Python & PostgreSQL - strong PL/pgSQL skills (procedures, functions, triggers); experience with ORM tools (SQLAlchemy, Django ORM). AWS serverless proficiency - hands-on with Lambda, Step Functions, API Gateway, and S3; knowledge of Cognito for auth and DynamoDB for NoSQL. DevOps & automation - CI/CD implementation (Jenkins/GitHub Actions); Infrastructure as Code (Terraform/CloudFormation). Soft skills - ability to work autonomously with minimal supervision; excellent communication (verbal/written) for US stakeholder collaboration. Preferred Qualifications: Bachelor's degree in Computer Science or a related field. AWS Certified Developer/Solutions Architect. Exposure to the utility/metering domain (e.g., smart energy, IoT).
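To illustrate the Python-plus-PostgreSQL backend work described here, below is a minimal psycopg2 sketch with a parameterized query; the connection settings and table are placeholders, not part of the posting.

```python
# Sketch of a Lambda-style function querying PostgreSQL/Aurora with psycopg2.
import os
import psycopg2

def get_meter_readings(meter_id: str) -> list[tuple]:
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            # Parameterized query avoids SQL injection
            cur.execute(
                "SELECT reading_ts, value FROM readings "
                "WHERE meter_id = %s ORDER BY reading_ts DESC LIMIT 100",
                (meter_id,),
            )
            return cur.fetchall()
    finally:
        conn.close()
```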
Posted 2 months ago
6.0 - 12.0 years
0 - 22 Lacs
Thiruvananthapuram / Trivandrum, Kerala, India
Remote
Mandatory skill set: Python, AWS services, PostgreSQL, AWS Aurora, Lambda, AWS API Gateway, AWS Step Functions. Max budget: 22 LPA. Experience: 6+ years overall and 4+ years relevant experience in Python + PostgreSQL + AWS (with exposure to Lambda). Notice period: immediate only. Work location: Trivandrum/Kochi/Remote. Job Description / Duties & Responsibilities: We are seeking a Senior Python Developer with strong knowledge of PostgreSQL to join our dynamic team. The ideal candidate will have a strong background in Python programming and PostgreSQL, and a sound understanding of web application development with a focus on utilizing AWS services to build scalable and efficient solutions. You will be responsible for delivering senior-level innovative, compelling, coherent software solutions for our consumer, internal operations, and value chain constituents across a wide variety of enterprise applications through the creation of discrete business services and their supporting components. Design, develop, and deliver solutions that meet business line and enterprise requirements. Lead a team of Python developers, providing technical guidance, mentorship, and support in project execution. Participate in rapid prototyping and POC development efforts. Advance overall enterprise technical architecture and implementation best practices. Assist in efforts to develop and refine functional and non-functional requirements. Participate in iteration and release planning. Perform functional and non-functional testing. Inform efforts to develop and refine functional and non-functional requirements. Demonstrate knowledge of, adherence to, monitoring of, and responsibility for compliance with state and federal regulations and laws as they pertain to this position. Strong ability to produce high-quality, properly functioning deliverables the first time. Deliver work product according to established deadlines. Estimate tasks with a level of granularity and accuracy commensurate with the information provided. Work collaboratively in a small team. Excel in a rapid iteration environment with short turnaround times. Deal positively with high levels of uncertainty, ambiguity, and shifting priorities. Accept a wide variety of tasks and pitch in wherever needed. Constructively present, discuss, and debate alternatives; take shared ownership of the product. Communicate effectively both verbally and in writing. Take direction from team leads and upper management. Ability to work with little to no supervision while performing duties. Job Specification / Skills and Competencies: Architect, design, and implement high-performance and scalable Python back-end applications. Proficiency in the Python programming language to develop backend services and APIs. Experience with web frameworks such as FastAPI, Flask, or Django for building RESTful APIs. Exposure to the utility domain (metering services) is an advantage. Experience with AWS services such as API Gateway, Lambda, Step Functions, and S3. Knowledge of implementing authentication and authorization mechanisms using AWS Cognito and other relevant services. Good understanding of databases including PostgreSQL, MongoDB, AWS Aurora, and DynamoDB. Experience with automated CI/CD implementation using Terraform is required. Deep understanding of one or more source/version control systems (Git/Bitbucket); develops branching and merging strategies. Working understanding of Web API, REST, JSON, etc. Good understanding of unit test creation.
In-depth knowledge of Oracle/PostgreSQL Database architecture and components. ? Strong SQL and PL/SQL or PL/pgSQL programming skills for querying, data manipulation, and stored procedure development. ? Proficiency in writing and maintaining PL/SQL or PL/pgSQL code, including procedures, functions, packages, and triggers. ? Strong understanding of PL/SQL or PL/pgSQL syntax, data types, and control structures. ? Mastery of SQL for querying and manipulating data in Oracle databases. ? Knowledge of complex SQL queries, joins, and subqueries. ? Familiarity with database design principles and data modeling. ? Expertise in optimizing PL/SQL or PL/pgSQL code and SQL queries for performance. ? Knowledge of query optimization techniques, indexing, and execution plan analysis. ? Ability to work with database designers to implement efficient schema structures. ? Bachelor's Degree is required, and/or a minimum of four (4) + related work experience. ? To adhere to the Information Security Management policies and procedures.
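Day-to-day work in a role like this typically means exposing PostgreSQL-backed data through a Lambda function behind API Gateway. The sketch below is purely illustrative and not part of the posting: the handler, environment variables, and meter_readings table are hypothetical, and connection secrets would normally come from Secrets Manager rather than plain environment variables.

```python
import json
import os

import psycopg2  # PostgreSQL driver; packaged with the Lambda deployment or via a layer


def lambda_handler(event, context):
    """Minimal API Gateway (proxy integration) handler returning recent meter readings."""
    # Illustrative only: real deployments pull credentials from Secrets Manager.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    try:
        meter_id = (event.get("pathParameters") or {}).get("meter_id")
        with conn.cursor() as cur:
            cur.execute(
                "SELECT reading_ts, reading_value FROM meter_readings "
                "WHERE meter_id = %s ORDER BY reading_ts DESC LIMIT 10",
                (meter_id,),
            )
            rows = [{"timestamp": str(ts), "value": float(val)} for ts, val in cur.fetchall()]
        return {"statusCode": 200, "body": json.dumps({"meter_id": meter_id, "readings": rows})}
    finally:
        conn.close()
```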
Posted 2 months ago
3.0 - 5.0 years
4 - 7 Lacs
Hyderabad
Work from Office
Data Analysis: Conduct in-depth analysis of data to identify trends, anomalies, and opportunities, utilizing SQL, AWS, and Python to extract and manipulate data.
Business Transformation: Translate existing SQL queries into business transformation logic, enabling the conversion of raw data into actionable insights that drive strategic decision-making.
Requirements Gathering: Collaborate with business stakeholders to gather and document clear and concise business requirements, ensuring a thorough understanding of data needs.
Documentation: Develop and maintain documentation related to data analysis, transformation, and reporting processes, ensuring knowledge transfer and continuity.
AWS Integration: Leverage AWS services to facilitate data extraction, storage, and analysis, making data readily available for the business.
Quality Assurance: Implement data quality checks and validation processes to ensure the accuracy and integrity of data used in analyses.
Qualifications:
Bachelor's degree in Business, Computer Science, or a related field.
Proven experience as a Business Analyst with a strong focus on data analysis and transformation.
Proficiency in SQL for querying and manipulating relational databases.
Awareness of AWS services such as Redshift, S3, Athena, Lambda, Step Functions, and AWS Batch.
Proficiency in Python for data analysis and scripting.
Experience in converting SQL queries into actionable business transformation logic.
Strong problem-solving and critical-thinking skills.
Excellent communication and interpersonal skills to work effectively with cross-functional teams and stakeholders.
Attention to detail and a commitment to data accuracy and quality.
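In practice, "converting SQL queries into transformation logic" on this stack often means running an existing reporting query through Athena and landing the result for downstream use. The boto3 sketch below is a minimal illustration under assumed names: the analytics_db database, sales table, and S3 output bucket are hypothetical, not taken from the posting.

```python
import time

import boto3  # AWS SDK for Python

athena = boto3.client("athena", region_name="us-east-1")


def run_transformation(query: str, database: str, output_s3: str) -> str:
    """Submit a SQL transformation to Athena and wait for completion, returning the execution id."""
    execution = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    query_id = execution["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)  # poll until Athena finishes
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {query_id} ended in state {state}")
    return query_id


# Example: an existing reporting query re-expressed as a repeatable transformation step.
run_transformation(
    query="SELECT region, SUM(amount) AS total_sales FROM sales GROUP BY region",
    database="analytics_db",
    output_s3="s3://example-athena-results/business-transformations/",
)
```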
Posted 2 months ago
7.0 - 12.0 years
20 - 32 Lacs
Hyderabad
Work from Office
Lead the design and implementation of event-driven and serverless architectures using AWS Lambda, Step Functions, SNS, and SQS. Develop and integrate secure APIs with third-party services such as Salesforce API, OpenAI API, and PagerDuty. Hands-on experience with DynamoDB.
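The event-driven pattern this posting describes commonly pairs an SQS-triggered Lambda with a DynamoDB write and an SNS fan-out to downstream integrations (for example, a PagerDuty subscriber). A minimal illustrative sketch follows; the topic ARN, table name, and message shape are assumptions, not details from the posting.

```python
import json
import os

import boto3

sns = boto3.client("sns")
dynamodb = boto3.resource("dynamodb")


def lambda_handler(event, context):
    """Process SQS messages, persist each record to DynamoDB, and publish a notification via SNS."""
    table = dynamodb.Table(os.environ["TABLE_NAME"])
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])
        # Persist the incoming event so it can be queried later.
        table.put_item(Item={"id": payload["id"], "status": payload.get("status", "NEW")})
        # Fan out to subscribers (e.g. a PagerDuty integration) that a new event arrived.
        sns.publish(
            TopicArn=os.environ["TOPIC_ARN"],
            Message=json.dumps(payload),
            Subject="New event processed",
        )
    return {"processed": len(records)}
```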
Posted 2 months ago
5.0 - 10.0 years
8 - 14 Lacs
Navi Mumbai
Work from Office
Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans.
Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices.
Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security.
Data Integration: Define and implement data integration strategies to facilitate the seamless flow of information across systems.
Responsibilities:
Experience in data architecture and engineering
Proven expertise with the Snowflake data platform
Strong understanding of ETL/ELT processes and data integration
Experience with data modeling and data warehousing concepts
Familiarity with performance tuning and optimization techniques
Excellent problem-solving skills and attention to detail
Strong communication and collaboration skills
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Cloud & Data Architecture: AWS, Snowflake
ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions
Big Data & Analytics: Athena, Presto, Hadoop
Database & Storage: SQL, SnowSQL
Security & Compliance: IAM, KMS, Data Masking
Preferred technical and professional experience:
Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization)
Data Transformation: dbt (Data Build Tool) for ELT pipeline management
Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)
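An ELT step on Snowflake of the kind this role covers can be driven from Python with the snowflake-connector-python package. The sketch below is a minimal illustration only: the account, warehouse, schema, and table names are hypothetical, and credentials would come from a secrets store rather than being inlined.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Illustrative connection parameters; real values belong in a secrets manager.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)

# A simple ELT step: materialize a curated table from raw staged data inside Snowflake.
transform_sql = """
CREATE OR REPLACE TABLE daily_sales AS
SELECT order_date, region, SUM(amount) AS total_amount
FROM RAW.ORDERS
GROUP BY order_date, region
"""

try:
    cur = conn.cursor()
    cur.execute(transform_sql)
    print(f"daily_sales rebuilt, {cur.rowcount} rows affected")
finally:
    conn.close()
```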
Posted 2 months ago