3.0 - 8.0 years
5 - 10 Lacs
Noida
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: Graduation

Summary: As an Application Lead for Software Development, you will be responsible for leading the effort to design, build, and configure applications using AWS Glue. Your typical day will involve collaborating with cross-functional teams, designing and implementing scalable and reliable solutions, and ensuring the timely delivery of projects.

Roles & Responsibilities:
- Lead the design, development, and deployment of AWS Glue-based applications, ensuring scalability, reliability, and performance.
- Collaborate with cross-functional teams, including developers, architects, and business analysts, to gather requirements and design solutions that meet business needs.
- Act as the primary point of contact for the project, providing technical guidance and support to team members and stakeholders.
- Ensure the timely delivery of projects, managing project timelines, budgets, and resources effectively.
- Stay updated with the latest advancements in AWS Glue and related technologies, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-have: Strong experience in AWS Glue, including designing, developing, and deploying applications.
- Good-to-have: Experience with AWS services such as S3, EC2, Lambda, and CloudFormation.
- Solid understanding of data integration and ETL processes, including data mapping, transformation, and loading.
- Experience with programming languages such as Python, Java, or Scala.
- Strong understanding of database technologies such as SQL, NoSQL, and data warehousing.
- Experience with Agile methodologies and DevOps practices, including continuous integration and delivery.

Additional Information: The candidate should have a minimum of 3 years of experience in AWS Glue and related technologies. The ideal candidate will possess a strong educational background in computer science, software engineering, or a related field, along with a proven track record of delivering impactful solutions. This position is based at our Mumbai office.
Qualification: Graduation
Posted 4 weeks ago
6.0 - 10.0 years
0 - 2 Lacs
Gurugram
Remote
We are seeking an experienced AWS Data Engineer to join our dynamic team. The ideal candidate will have hands-on experience in building and managing scalable data pipelines on AWS using Databricks, a deep understanding of the Software Development Life Cycle (SDLC), and will play a critical role in enabling our data architecture, driving data quality, and ensuring the reliable and efficient flow of data throughout our systems.

Required Skills:
- 7+ years of comprehensive experience working as a Data Engineer, with expertise in AWS services (S3, Glue, Lambda, etc.).
- In-depth knowledge of Databricks, pipeline development, and data engineering.
- 2+ years of experience working with Databricks for data processing and analytics.
- Ability to architect and design pipelines (e.g., Delta Live Tables).
- Proficiency in programming languages such as Python, Scala, or Java for data engineering tasks.
- Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
- Experience with ETL/ELT tools and processes in a cloud environment.
- Familiarity with big data processing frameworks (e.g., Apache Spark).
- Experience with data modeling, data warehousing, and building scalable architectures.
- Ability to understand and implement security aspects when consuming data from different sources.

Preferred Qualifications:
- Experience with Apache Airflow or other workflow orchestration tools; Terraform, Python, and Spark will be preferred.
- AWS Certified Solutions Architect, AWS Certified Data Analytics - Specialty, or similar certifications.
Posted 4 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
We are seeking a Lead Developer with a strong background in AWS/Azure, Python, and PostgreSQL to join our dynamic team. The ideal candidate will have formal training or certification in software engineering concepts and at least 9 years of work experience.

Key Responsibilities:
Must have:
- Design, develop, and maintain applications using AWS Postgres, Python, Glue, Lambda, and Linux.
- Write and optimize complex queries to support data-driven decision-making.
- Use source code versioning tools such as Subversion or Bitbucket and adhere to coding standards.
- Leverage CI/CD pipelines to rapidly build and test application code.
- Write and maintain scripts using Python and UNIX shell.
Desired:
- Implement and manage columnar database technologies such as Redshift.
- Utilize scheduling tools such as Control-M, Autosys, or similar to manage workflows.
Posted 4 weeks ago
4.0 - 6.0 years
5 - 15 Lacs
Gurugram
Work from Office
Singleinterface is looking for a Data Engineer who can own and manage data flow automation across multiple systems, ensuring smooth, reliable, and scalable pipelines. This role involves integrating APIs, automating reporting workflows, and consolidating data into our data warehouse using tools like AWS and Google Sheets. You'll work closely with business and analytics teams to enable efficient reporting and data accessibility across the organization.

Key Responsibilities:
- Build and maintain automated data pipelines from various internal and external sources to our data warehouse.
- Design and implement API integrations for ingesting and updating data from third-party systems.
- Develop daily and monthly automation triggers to ensure timely data availability for reporting and analysis.
- Automate data movement and transformation using AWS services (e.g., Lambda, S3, Glue) and Google Sheets / Apps Script where applicable.
- Manage and optimize collation of data into the central data warehouse.
- Support ad hoc data requests and reporting needs from business stakeholders.
- Collaborate with analysts, BI developers, and business leads to improve data workflows and accessibility.

Required Skills:
- 3-5 years of experience in data engineering, automation, or ETL development.
- Strong experience with AWS (Lambda, S3, Glue, CloudWatch) or equivalent cloud automation tools.
- Experience with data warehousing concepts, ETL/ELT processes, and schema design.
- Proficiency in SQL and data modeling for warehousing.
- Scripting knowledge in Python or equivalent for automation.
- Experience working with REST APIs, including integration, pagination, and authentication.
- Familiarity with Google Sheets, Apps Script, or similar tools for lightweight automation.
- Excellent debugging, testing, and documentation habits.
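The REST API integration skills this posting asks for — pagination in particular — can be sketched in a few lines of Python. The page-payload shape and the stubbed fetcher below are assumptions for illustration, not details from the posting; a real integration would make authenticated HTTP calls (e.g., `requests` with an Authorization header) before landing the records in S3.

```python
from typing import Callable, Iterator

def fetch_all_pages(fetch_page: Callable[[int], dict]) -> Iterator[dict]:
    """Iterate over a paginated REST API, yielding individual records.

    `fetch_page` is any callable returning a page payload shaped like
    {"items": [...], "has_more": bool} -- this shape is an assumption
    and will differ per third-party API.
    """
    page = 0
    while True:
        payload = fetch_page(page)
        for item in payload.get("items", []):
            yield item
        if not payload.get("has_more"):
            break
        page += 1

# Stubbed fetcher standing in for a real authenticated HTTP call.
def fake_fetch(page: int) -> dict:
    items = [{"id": i} for i in range(page * 2, page * 2 + 2)]
    return {"items": items, "has_more": page < 2}

records = list(fetch_all_pages(fake_fetch))
# 3 pages of 2 records each -> 6 records
```

Keeping the fetcher injectable like this also makes the pagination logic trivially unit-testable, which matches the posting's emphasis on debugging and testing habits.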
Posted 4 weeks ago
6.0 - 9.0 years
10 - 17 Lacs
Pune, Chennai, Bengaluru
Work from Office
Primary Skills: AWS Glue, Lambda, Aurora, Athena, Redshift, Python, PySpark. Share your updated CV to: manju.d@infyjob.com
Posted 1 month ago
5.0 - 10.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication among team members and stakeholders.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process
- Ensure effective communication among team members and stakeholders
- Implement best practices for application design and configuration

Professional & Technical Skills:
- Must-have: Proficiency in AWS Glue
- Strong understanding of cloud computing principles
- Experience with data integration and ETL processes
- Knowledge of data warehousing concepts
- Hands-on experience with AWS services such as S3, Lambda, and Redshift

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Glue
- This position is based at our Gurugram office
- 15 years of full-time education is required
Posted 1 month ago
7.0 - 12.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process
- Ensure timely project delivery
- Provide technical guidance and support to the team

Professional & Technical Skills:
- Must-have: Proficiency in AWS Glue
- Strong understanding of cloud computing principles
- Experience with data integration and ETL processes
- Hands-on experience in designing and implementing scalable applications
- Knowledge of data warehousing concepts

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in AWS Glue
- This position is based at our Gurugram office
- 15 years of full-time education is required
Posted 1 month ago
5.0 - 8.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the decision-making process. Your role will require a balance of technical expertise and leadership skills to drive project success and foster a collaborative team environment.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have: Proficiency in AWS Glue.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with data warehousing concepts and best practices.
- Ability to troubleshoot and optimize data workflows.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in AWS Glue.
- This position is based in Pune.
- 15 years of full-time education is required.
Posted 1 month ago
3.0 - 8.0 years
15 - 30 Lacs
Bengaluru
Work from Office
We are hiring skilled Backend Developers to join our technology team supporting a top-tier client in the Retirement Pension Planning and Insurance domain. You'll work on large-scale enterprise data warehouse systems and develop robust, scalable data pipelines across real-time and batch environments.

Roles & Responsibilities:
- Design, develop, and maintain scalable backend data pipelines using AWS Glue, PySpark, Lambda, and Kinesis.
- Implement both batch and real-time data ingestion and transformation flows using Alteryx.
- Collaborate with solution architects, analysts, and business stakeholders for data modeling and integration.
- Optimize data workflow performance, storage, and processing across multiple datasets.
- Troubleshoot data pipeline issues, maintain documentation, and ensure adherence to best practices.
- Work in agile teams and participate in sprint planning and code reviews.

Technical Skills Required:
Must-have:
- 3+ years of experience with AWS Glue, PySpark, and AWS Lambda
- Hands-on experience with AWS Kinesis or Amazon MSK
- Proficiency in scripting using Python
- Experience working with data warehouses and ETL frameworks
- Knowledge of batch and real-time data processing with Alteryx
Good-to-have:
- Understanding of data lake architectures and S3-based pipelines
- Familiarity with CI/CD tools for cloud deployment
- Basic knowledge of data governance tools or BI platforms (Tableau/Snowflake)
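On the real-time side of the pipelines described above, one practical detail is that the Kinesis `PutRecords` API accepts at most 500 records per call, so a producer typically batches before sending. A minimal sketch of that batching step in Python — the actual `boto3` send and stream wiring are deliberately left out:

```python
def chunk_records(records: list, max_batch: int = 500) -> list:
    """Split a record list into batches no larger than Kinesis
    PutRecords' 500-record-per-call limit; each batch would then be
    passed to boto3's kinesis put_records in a real producer."""
    return [records[i:i + max_batch] for i in range(0, len(records), max_batch)]

batches = chunk_records(list(range(1200)))
# -> 3 batches of sizes 500, 500, 200
```

A production producer would also respect the 5 MB per-call payload cap and retry any records the API reports as failed, but the record-count split above is the core of the batching logic.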
Posted 1 month ago
5.0 - 7.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer / Technical Lead
Location: Bangalore
Employment Type: Full-time

Role Summary: We are seeking a highly skilled and motivated Senior Data Engineer / Technical Lead to take ownership of the end-to-end delivery of a key project involving data lake transitions, data warehouse maintenance, and enhancement initiatives. The ideal candidate will bring strong technical leadership, excellent communication skills, and hands-on expertise with modern data engineering tools and platforms. Experience with Databricks and JIRA is highly desirable. Knowledge of the supply chain and finance domains is a plus, or a willingness to quickly ramp up in these areas is expected.

Key Responsibilities:
Delivery Management
- Lead and manage data lake transition initiatives under the Gold framework.
- Oversee delivery of enhancements and defect fixes related to the enterprise data warehouse.
Technical Leadership
- Design and develop efficient, scalable data pipelines using Python, PySpark, and SQL.
- Ensure adherence to coding standards, performance benchmarks, and data quality goals.
- Conduct performance tuning and infrastructure optimization for data solutions.
- Provide code reviews, mentorship, and technical guidance to the engineering team.
Collaboration & Stakeholder Engagement
- Collaborate with business stakeholders (particularly the Laboratory Products team) to gather, interpret, and refine requirements.
- Communicate technical solutions and project progress clearly to both technical and non-technical audiences.
Tooling and Technology Use
- Leverage tools such as Databricks, Informatica, AWS Glue, Google DataProc, and Airflow for ETL and data integration.
- Use JIRA to manage project workflows, track defects, and report progress.
Documentation and Best Practices
- Create and review documentation including architecture, design, testing, and deployment artifacts.
- Define and promote reusable templates, checklists, and best practices for data engineering tasks.
Domain Adaptation
- Apply or gain knowledge in the supply chain and finance domains to enhance project outcomes and align with business needs.

Skills and Qualifications:
Technical Proficiency
- Strong hands-on experience in Python, PySpark, and SQL.
- Expertise with ETL tools such as Informatica, AWS Glue, Databricks, and Google Cloud DataProc.
- Deep understanding of data warehousing solutions (e.g., Snowflake, BigQuery, Delta Lake, Lakehouse architectures).
- Familiarity with performance tuning, cost optimization, and data modeling best practices.
Platform & Tools
- Proficient in working with cloud platforms like AWS, Azure, or Google Cloud.
- Experience with version control and configuration management practices.
- Working knowledge of JIRA and Agile methodologies.
Certifications (preferred but not required)
- Certifications in cloud technologies, ETL platforms, or a relevant domain (e.g., AWS Data Engineer, Databricks Data Engineer, supply chain certification).

Expected Outcomes:
- Timely and high-quality delivery of data engineering solutions.
- Reduction in production defects and improved pipeline performance.
- Increased team efficiency through reuse of components and automation.
- Positive stakeholder feedback and high team engagement.
- Consistent adherence to SLAs, security policies, and compliance guidelines.

Performance Metrics:
- Adherence to project timelines and engineering standards
- Reduction in post-release defects and production issues
- Improvement in data pipeline efficiency and resource utilization
- Resolution time for pipeline failures and data issues
- Completion of required certifications and training

Preferred Background:
- Background or exposure to the supply chain or finance domains
- Willingness to work during morning US East hours
- Ability to work independently and drive initiatives with minimal oversight

Required Skills: Databricks, Data Warehousing, ETL, SQL
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Pune
Work from Office
New Opportunity: Full Stack Engineer
Location: Pune (Onsite)
Company: Apptware Solutions (hiring)
Experience: 4+ years

We're looking for a skilled Full Stack Engineer to join our team. If you have experience building scalable applications and working with modern technologies, this role is for you.

Role & Responsibilities:
- Develop product features to help customers easily transform data.
- Design, implement, deploy, and support client-side and server-side architectures, including web applications, CLI, and SDKs.

Minimum Requirements:
- 4+ years of experience as a Full Stack Developer or in a similar role.
- Hands-on experience in a distributed engineering role with direct operational responsibility (on-call experience preferred).
- Proficiency in at least one back-end language (Node.js, TypeScript, Python, or Go).
- Front-end development experience with Angular or React, HTML, CSS.
- Strong understanding of web applications, backend APIs, CI/CD pipelines, and testing frameworks.
- Familiarity with NoSQL databases (e.g., DynamoDB) and AWS services (Lambda, API Gateway, Cognito, etc.).
- Bachelor's degree in Computer Science, Engineering, Math, or equivalent experience.
- Strong written and verbal communication skills.

Preferred Skills:
- Experience with AWS Glue, Spark, or Athena.
- Strong understanding of SQL and data engineering best practices.
- Exposure to analytical EDWs (Snowflake, Databricks, BigQuery, Cloudera, Teradata).
- Experience in B2B applications, SaaS offerings, or startups is a plus.

(ref:hirist.tech)
Posted 1 month ago
3.0 - 8.0 years
6 - 10 Lacs
Andhra Pradesh
Work from Office
Requirements:
- 3-4 years of hands-on experience on AWS, ideally SaaS in the cloud.
- 1-2 years of experience working with AWS Connect and Lex bots.
- Experience developing solutions with a code/scripting language; must have Python experience (e.g., Python, Node.js).
- Hands-on experience in building and troubleshooting contact centre solutions.
- Experience in creating and configuring AWS resources such as VPC, API Gateway, CloudWatch, Athena, Glue, CloudFormation, EC2, Lambda, Connect, SNS, etc.
- Designing highly available applications, with responsibility for infrastructure robustness, including networking, communications, EC2, and storage.
- Application, server, and/or network security.
- Working knowledge of popular communications protocols and APIs such as WebRTC and SIP.
- Experience creating contact workflows, routing profiles, voice & chat management, and callback queue management.
- Good at troubleshooting, debugging, and problem solving in an AWS environment.
- Experience with Git source control and Git workflows.
- Good to have: knowledge of CRM integration.

Skill Sets:
- Able to configure and work with source control (Git, Bitbucket, and so forth).
- Basic knowledge of Scrum and Agile; understands the value of continuous deployment and delivery.
- Strong in teamwork and willing to help teammates.
- Willing to deep dive on issues at hand and come up with solutions.
- Looking for continuous improvement; willing to learn and try new technologies.
- Accountable and responsible, and holds his/her peers accountable.
- Able to read and reverse engineer existing/legacy code in a timely manner.
- Able to break down complex tasks into smaller ones.
Posted 1 month ago
5.0 - 10.0 years
20 - 27 Lacs
Hyderabad
Work from Office
Position: Experienced Data Engineer

We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.

Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in Data Engineering on a cloud platform.

Key Skills & Experience:
- Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch jobs.
- Strong expertise in SQL and Python; DBT and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka.
- In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
- Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
- Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
- Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
- Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
- Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
- Exposure to NoSQL databases is a plus.
Posted 1 month ago
8.0 - 12.0 years
20 - 25 Lacs
Hyderabad
Work from Office
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc, a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing, and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.

About the Data Platform: The Data Platform will be built and managed as a "Product" to support a Data Mesh organization. It focusses on enabling decentralized management, processing, analysis, and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.

What does a Data Infrastructure Engineer do? A Data Infrastructure Engineer will be responsible for developing, maintaining, and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science, and data management across the CACI business. The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment.

You will use Infrastructure as Code and CI/CD to continuously improve, evolve, and repair the platform. You will be able to design architectures and create reusable solutions to reflect the business needs.

Responsibilities will include:
- Collaborating across CACI departments to develop and maintain the data platform
- Building infrastructure and data architectures in CloudFormation and SAM
- Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora, and Snowflake
- Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions, and Apache Airflow
- Monitoring and reporting on the data platform's performance, usage, and security
- Designing and applying security and access control architectures to secure sensitive data

You will have:
- 8+ years of experience in a Data Engineering role
- Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora, and Redshift
- Experience administrating databases and data platforms
- Good coding discipline in terms of style, structure, versioning, documentation, and unit tests
- Strong proficiency in CloudFormation, Python, and SQL
- Knowledge and experience of relational databases such as Postgres and Redshift
- Experience using Git for code versioning and lifecycle management
- Experience operating to Agile principles and ceremonies
- Hands-on experience with CI/CD tools such as GitLab
- Strong problem-solving skills and the ability to work independently or in a team environment
- Excellent communication and collaboration skills
- A keen eye for detail, and a passion for accuracy and correctness in numbers

Whilst not essential, the following skills would also be useful:
- Experience using Jira or other agile project management and issue tracking software
- Experience with Snowflake
- Experience with spatial data processing
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Key Responsibilities:
- Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
- Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
- Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
- Ensure data quality and consistency by implementing validation and governance practices.
- Work on data security best practices in compliance with organizational policies and regulations.
- Automate repetitive data engineering tasks using Python scripts and frameworks.
- Leverage CI/CD pipelines for deployment of data workflows on AWS.

Required Skills and Qualifications:
- Professional Experience: 5+ years of experience in data engineering or a related field.
- Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
- AWS Expertise: Hands-on experience with core AWS services for data engineering, such as AWS Glue for ETL/ELT, S3 for storage, Redshift or Athena for data warehousing and querying, Lambda for serverless compute, Kinesis or SNS/SQS for data streaming, and IAM roles for security.
- Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
- Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
- DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
- Version Control: Proficient with Git-based workflows.
- Problem Solving: Excellent analytical and debugging skills.

Optional Skills:
- Knowledge of data modeling and data warehouse design principles.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
- Exposure to other programming languages like Scala or Java.
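The "data quality and consistency" responsibility in this posting usually comes down to gating rows before they are loaded. A minimal Python sketch of such a validation step — the field names are hypothetical, and a real pipeline would likely lean on a dedicated framework (e.g., Glue Data Quality or Great Expectations) rather than hand-rolled checks:

```python
def validate_rows(rows: list, required_fields: list) -> tuple:
    """Partition incoming rows into valid and rejected sets before load.
    A row is rejected if any required field is missing or empty."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        (rejected if missing else valid).append(row)
    return valid, rejected

good, bad = validate_rows(
    [{"id": 1, "ts": "2024-01-01"}, {"id": 2, "ts": ""}],
    required_fields=["id", "ts"],
)
# one row passes, one is rejected for the empty "ts"
```

Routing the rejected set to a quarantine location (e.g., a separate S3 prefix) rather than dropping it silently is the usual practice, so data issues stay auditable.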
Posted 1 month ago
6.0 - 11.0 years
4 - 8 Lacs
Kolkata
Work from Office
SET 1: Must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL. Working knowledge of Azure DevOps and Git flow would be an added advantage.
(OR) SET 2: Must have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift.
- Should have demonstrable knowledge and expertise in working with time-series data.
- Working knowledge of delivering data engineering / data science projects in Industry 4.0 is an added advantage.
- Should have knowledge of Palantir.
- Strong problem-solving skills with an emphasis on sustainable and reusable development.
- Experience using statistical computing languages to manipulate data and draw insights from large data sets: Python/PySpark, pandas, NumPy, seaborn/matplotlib. Knowledge of Streamlit is a plus.
- Familiarity with Scala, GoLang, or Java would be an added advantage.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, Oracle, and NoSQL databases such as Hadoop, Cassandra, MongoDB.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Strong analytic skills related to working with unstructured datasets.

Primary Skills:
- Provide innovative solutions to the data engineering problems faced in the project and solve them with technically superior code and skills.
- Where possible, document the process of choosing technology or usage of integration patterns, and help create a knowledge management artefact that can be used for other similar areas.
- Create and apply best practices in delivering the project with clean code.
- Work innovatively and with a sense of proactiveness in fulfilling the project needs.

Additional Information:
- Reporting to: Director - Intelligent Insights and Data Strategy
- Travel: Must be willing to be deployed at client locations anywhere in the world for long and short terms, and should be flexible to travel on shorter durations within India and abroad.
Posted 1 month ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments.

Key Responsibilities:
- Configure logging and metrics collection.
- Set up alerts and dashboards using Grafana, Prometheus, etc.
- Optimize system visibility for performance and security.

Required Skills & Qualifications:
- Familiarity with the ELK stack, Datadog, New Relic, or cloud-native monitoring tools.
- Strong troubleshooting and root cause analysis skills.
- Knowledge of distributed systems.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
Posted 1 month ago
5.0 - 8.0 years
9 - 19 Lacs
Gurugram, Bengaluru
Work from Office
Hi, greetings of the day!

Hiring for an MNC for a Sr Data Engineer profile.

Profile: Sr Data Engineer
Experience: 4-10 years
Interview Mode: Virtual
Mandatory Skills: PySpark, Python, AWS (Glue, EC2, Redshift, Lambda), Spark, Big Data, ETL, SQL, Data Warehousing.
Good to have: Data structures and algorithms.

Responsibilities:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Proven experience as a Data Engineer or in a similar role
- Experience with Python and big data technologies (Hadoop, Spark, Kafka, etc.)
- Experience with relational SQL and NoSQL databases
- Strong analytic skills related to working with unstructured datasets
- Strong project management and organizational skills
- Experience with AWS cloud services: EC2, Lambda (Step Functions), RDS, Redshift
- Ability to work in a team environment
- Excellent written and verbal communication skills
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Interested candidates can share their resume at avanya@niftelresources.com or contact 9219975840.
Posted 1 month ago
5.0 - 8.0 years
8 - 18 Lacs
Bengaluru
Hybrid
Technical Skills: Python, PySpark, SQL, Redshift, S3, CloudWatch, Lambda, AWS Glue, EMR, Step Functions, Databricks. Knowledge of a visualization tool will add value.
Experience: Should have worked in the technical delivery of the above services, preferably in similar organizations, and have good communication skills.
Certifications: AWS Data Engineer certification preferred.
Posted 1 month ago
3.0 - 5.0 years
10 - 15 Lacs
Pune
Work from Office
About the Role: Data Engineer

Core Responsibilities:
- Lead one of the key analytics areas end-to-end; this is a pure hands-on role.
- Ensure the solutions built meet the required best practices and coding standards.
- Adapt to any new technology if the situation demands.
- Gather requirements with the business and get them prioritized in the sprint cycle.
- Take end-to-end responsibility for the assigned tasks; ensure quality and timely delivery.

Preference and Experience:
- Strong PySpark, Python, and Java fundamentals
- Good understanding of data structures
- Good at SQL queries/optimization
- Strong fundamentals of OOP programming
- Good understanding of AWS Cloud and Big Data
- Nice to have: Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DBs

Academic qualifications: Must be a technical graduate, B.Tech / M.Tech from Tier 1/2 colleges.
Posted 1 month ago
5.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Title: DevOps Engineer (AWS Glue, KMS, ALB, ECS, and Terraform/Terragrunt)
Experience: 5-10 Years
Location: Bangalore
Skills: DevOps, AWS, Glue, KMS, ALB, ECS, Terraform, Terragrunt
Posted 1 month ago
5.0 - 10.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Title: AWS Data Engineer
Experience: 5-10 Years
Location: Bangalore

Technical Skills:
- 5+ years of experience as an AWS Data Engineer: AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena
- Write Glue ETL jobs to convert data from AWS RDS (SQL Server and Oracle DB) to Parquet format in S3
- Execute Glue crawlers to catalog S3 files, creating a catalog for easier querying
- Create SQL queries in Athena
- Define data lifecycle management for S3 files
- Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio
- Ability to connect Glue ETL jobs with AWS RDS (SQL Server and Oracle) for data extraction and to write transformed data into Parquet format in S3
- Proficiency in setting up and managing Glue crawlers to catalog data in S3
- Deep understanding of S3 architecture and best practices for storing large datasets
- Experience in partitioning and organizing data for efficient querying in S3
- Knowledge of the Parquet file format's advantages for optimized storage and querying
- Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3
- Experience with Amazon Athena for writing complex SQL queries and optimizing query performance
- Familiarity with creating views or transformations in Athena for business use cases
- Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption
- Understanding of regulatory requirements (e.g., GDPR) and implementation of secure data handling practices

Non-Technical Skills:
- Good team player
- Effective interpersonal, team-building, and communication skills
- Ability to communicate complex technology to a non-technical audience in a simple and precise manner
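The partitioning practice this role calls for usually means writing Parquet files under a Hive-style key=value prefix layout: Glue crawlers register those path components as partition columns, and Athena then prunes its S3 scans to only the matching prefixes. A minimal sketch in plain Python of how such a prefix might be built (partition_path is a hypothetical helper for illustration, not part of any AWS SDK):

```python
from datetime import date

def partition_path(bucket: str, table: str, d: date) -> str:
    """Build a Hive-style S3 prefix (year=/month=/day=) for Parquet output.

    A Glue crawler recognizes this key=value layout and registers year,
    month, and day as partition columns, so Athena queries filtering on
    them scan only the matching prefixes instead of the whole table.
    """
    return (f"s3://{bucket}/{table}/"
            f"year={d.year}/month={d.month:02d}/day={d.day:02d}/")

# Example: where one day's ETL output might land (bucket/table names invented).
print(partition_path("data-lake", "orders", date(2024, 3, 7)))
# s3://data-lake/orders/year=2024/month=03/day=07/
```

Since Athena bills by bytes scanned, partition pruning of this kind is typically the single biggest lever for both query cost and performance on large S3 datasets.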
Posted 1 month ago
4.0 - 6.0 years
2 - 6 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Job Title: Sr AWS Data Engineer
Experience: 4-6 Years
Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]
Skills: PySpark, Python, SQL; AWS services: S3, Athena, Glue, EMR/Spark, Redshift, Lambda, Step Functions, IAM, CloudWatch.
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Job Title: Senior DevOps Engineer
Experience: 7-14 Years
Location: Bangalore

Looking for a senior resource with at least 7 years of hands-on experience in AWS DevOps.

Must have experience in:
- Terraform/Terragrunt
- Azure DevOps pipelines
- CI/CD pipelines (GitHub Actions/Jenkins)
- Hands-on provisioning of AWS services such as ALB, ECS, Lambda, IAM, KMS, RDS (Postgres), and AWS Glue
- HA-DR topologies

Good to have: experience in a scripting language such as shell scripting.

Qualification: Bachelor's or master's degree in Computer Science, Information Systems, Engineering, or equivalent.
Posted 1 month ago
5.0 - 10.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 5-10 Years
Location: Bangalore

Data Engineers with PySpark and AWS Glue experience. AWS is mandatory; GCP and Azure are add-ons.

- Proven experience as a Data Engineer or in a similar role spanning data architecture, database management, and cloud technologies.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong experience with data processing frameworks like PySpark, Apache Kafka, or Hadoop.
- Hands-on experience with data warehousing solutions such as Redshift, BigQuery, Snowflake, or similar platforms.
- Strong knowledge of SQL and relational databases (e.g., PostgreSQL, MySQL).
- Experience with version control tools like Git.
- Familiarity with containerization and orchestration tools like Docker, Kubernetes, and Airflow is a plus.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication skills and ability to collaborate with cross-functional teams.

Qualifications: Bachelor's or master's degree in Computer Science, Information Systems, Engineering, or equivalent.
Posted 1 month ago