1.0 - 3.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Backend Developer

Job Overview:
We are seeking a skilled and motivated Backend Developer with expertise in FastAPI, Python, and SQL to join our dynamic team. As a backend developer, you will be responsible for designing, building, and maintaining high-performance, scalable APIs, as well as ensuring smooth data integration and efficient database management. You will work closely with cross-functional teams, including frontend developers, data engineers, and product managers, to deliver reliable and optimized solutions that meet business requirements.

Key Responsibilities:
- API Development: Design, build, and maintain RESTful APIs using FastAPI. Implement high-performance, scalable, and secure backend services to support web and mobile applications. Ensure APIs are well documented and adhere to industry standards and best practices.
- Database Management: Work with SQL databases (e.g., PostgreSQL, MySQL) for efficient data storage, querying, and management. Design and optimize database schemas for performance, scalability, and ease of use. Write complex SQL queries for data retrieval, transformation, and integration with backend systems.
- Backend System Design: Architect backend services with a focus on performance, scalability, and security. Work with asynchronous programming techniques to ensure efficient request handling. Collaborate with DevOps engineers to deploy, monitor, and maintain backend services in production environments.
- Security and Compliance: Implement robust authentication and authorization mechanisms (e.g., JWT, OAuth2) to secure backend services (see the sketch after this listing). Ensure adherence to security best practices, including protection against common vulnerabilities (e.g., SQL injection, CSRF).
- Collaboration and Code Review: Collaborate with cross-functional teams (frontend, DevOps, product) to ensure cohesive product development. Participate in code reviews, providing feedback to ensure high-quality code, adherence to best practices, and maintainability. Contribute to continuous improvement processes, identifying ways to improve performance and scalability.
- Testing and Debugging: Write and maintain unit, integration, and performance tests to ensure API stability and functionality. Troubleshoot, debug, and resolve issues with APIs and backend services to maintain uptime and reliability.
- Continuous Learning: Stay updated with the latest trends and advancements in Python, FastAPI, database management, and backend development. Propose and implement innovative solutions to improve system performance and developer workflows.

Key Skills and Qualifications:
- Proficiency in FastAPI and a deep understanding of building RESTful APIs with Python.
- Strong knowledge of Python and its frameworks/libraries for backend development.
- Proficiency in SQL and experience working with relational databases like PostgreSQL, MySQL, or similar.
- Experience with asynchronous programming and concurrent processing in Python.
- Familiarity with CI/CD pipelines, Docker, Kubernetes, and cloud platforms (AWS, Azure, or GCP) is a plus.
- Solid understanding of API security and best practices, including experience with JWT and OAuth2 for authentication.
- Experience with version control systems like Git.
- Strong debugging, problem-solving, and performance optimization skills.
- Excellent communication and teamwork skills, with the ability to work in a fast-paced, collaborative environment.
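To make the FastAPI-plus-JWT stack above concrete, here is a minimal sketch of a protected endpoint. It assumes the python-jose library for token decoding; SECRET_KEY, the /items route, and get_current_user are illustrative names, not details from the posting.

```python
# A minimal sketch, assuming python-jose and a pre-shared HMAC secret.
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError, jwt

SECRET_KEY = "change-me"   # hypothetical; load from env/secret manager in practice
ALGORITHM = "HS256"

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

def get_current_user(token: str = Depends(oauth2_scheme)) -> str:
    """Decode the bearer JWT and return its subject, rejecting invalid tokens."""
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    except JWTError:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                            detail="Invalid or expired token")
    user = payload.get("sub")
    if user is None:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                            detail="Token missing subject claim")
    return user

@app.get("/items/{item_id}")
async def read_item(item_id: int, user: str = Depends(get_current_user)):
    # Async handler: the event loop serves other requests while I/O waits.
    return {"item_id": item_id, "owner": user}
```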
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Summary:
We are seeking an experienced Data Engineer with expertise in Snowflake and PL/SQL to design, develop, and optimize scalable data solutions. The ideal candidate will be responsible for building robust data pipelines, managing integrations, and ensuring efficient data processing within the Snowflake environment. This role requires a strong background in SQL, data modeling, and ETL processes, along with the ability to troubleshoot performance issues and collaborate with cross-functional teams.

Responsibilities:
- Design, develop, and maintain data pipelines in Snowflake to support business analytics and reporting (see the sketch after this listing).
- Write optimized PL/SQL queries, stored procedures, and scripts for efficient data processing and transformation.
- Integrate and manage data from various structured and unstructured sources into the Snowflake data platform.
- Optimize Snowflake performance by tuning queries, managing workloads, and implementing best practices.
- Collaborate with data architects, analysts, and business teams to develop scalable and high-performing data solutions.
- Ensure data security, integrity, and governance while handling large-scale datasets.
- Automate and streamline ETL/ELT workflows for improved efficiency and data consistency.
- Monitor, troubleshoot, and resolve data quality issues, performance bottlenecks, and system failures.
- Stay updated on Snowflake advancements, best practices, and industry trends to enhance data engineering capabilities.

Required Skills:
- Bachelor's degree in Engineering, Computer Science, Information Technology, or a related field.
- Strong experience in Snowflake, including designing, implementing, and optimizing Snowflake-based solutions.
- Hands-on expertise in PL/SQL, including writing and optimizing complex queries, stored procedures, and functions.
- Proven ability to work with large datasets, data warehousing concepts, and cloud-based data management.
- Proficiency in SQL, data modeling, and database performance tuning.
- Experience with ETL/ELT processes and integrating data from multiple sources.
- Familiarity with cloud platforms such as AWS, Azure, or GCP is an added advantage.
- Snowflake certifications (e.g., SnowPro Core, SnowPro Advanced) are a plus.
- Strong analytical skills, problem-solving abilities, and attention to detail.
- Excellent communication skills and the ability to work effectively in a collaborative environment.
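One common pattern behind "transformation within Snowflake" is a set-based MERGE driven from Python. A hedged sketch using snowflake-connector-python follows; the connection parameters and table names are illustrative assumptions, not details from the posting.

```python
# Illustrative ELT step: upsert staged rows into a target table in Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",  # assumed credentials
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
try:
    cur = conn.cursor()
    # MERGE keeps the transformation set-based, which Snowflake optimizes well.
    cur.execute("""
        MERGE INTO analytics.core.orders AS t
        USING staging.orders_raw AS s
        ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.status = s.status, t.updated_at = s.loaded_at
        WHEN NOT MATCHED THEN INSERT (order_id, status, updated_at)
            VALUES (s.order_id, s.status, s.loaded_at)
    """)
    conn.commit()
finally:
    conn.close()
```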
Posted 3 weeks ago
3.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Job Overview:
We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience in building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & Microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks.
- Data Engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System Architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with Cross-Functional Teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack App Development: Design and implement Slack integrations and custom apps as required for team productivity and automation (see the sketch after this listing).
- Code Quality: Ensure high-quality coding standards through rigorous testing, code reviews, and writing maintainable code.
- SQL Expertise: Write efficient and optimized SQL queries for data storage, retrieval, and analysis.
- Microservices Architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming Languages: Expert in Python, with solid experience building APIs and microservices.
- Web Frameworks & APIs: Strong hands-on experience with FastAPI and, optionally, Flask, designing RESTful APIs.
- Data Engineering Expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & Microservices Architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack App Development: Experience with integrating Slack apps or creating custom Slack workflows.
- Reusable Framework Development: Ability to design modular and configurable frameworks that can be reused across various teams and systems.
- Excellent Problem-Solving Skills: Ability to break down complex problems and deliver practical solutions.
- Software Development Experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us:
- Growth Opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative Culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive Compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative Projects: Be a part of projects that have a real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!
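Since Slack app development is a distinguishing requirement here, a minimal sketch using the slack-bolt library follows. The slash command name and environment variables are assumptions for illustration only.

```python
# Minimal Slack app sketch with slack-bolt; command and env vars are assumed.
import os
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

@app.command("/pipeline-status")
def pipeline_status(ack, respond, command):
    """Respond to a slash command, e.g. surfacing ETL job health in Slack."""
    ack()  # acknowledge within Slack's 3-second window
    respond(f"Checking pipeline status for: {command.get('text', 'all')}")

if __name__ == "__main__":
    app.start(port=3000)  # Socket Mode is an alternative in locked-down networks
```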
Posted 3 weeks ago
2.0 - 8.0 years
19 - 23 Lacs
Hyderabad
Work from Office
In this role you will be joining the Enterprise Data Solutions team, within the Digital & Information Technology organization. Driven by a passion for operational excellence and innovation, we partner with all functional groups to provide the expertise and technologies which will enable the company to digitalize, simplify, and scale for the future. We are seeking an experienced Sr. Data Engineer to join our Enterprise Data Solutions team. The ideal candidate will have a strong background in data engineering, data analysis, business intelligence, and data management. This role will be responsible for the ingestion, processing, and storage of data in our Azure Databricks Data Lake and SQL Server data warehouses.

OVERVIEW:
The Enterprise Data Solutions team provides Skillsoft with the data backbone needed to seamlessly connect systems and enable data-driven business insights through democratized and analytics-ready data sets. Our mission is to deliver analytics-ready data sets that enhance business insights, drive decision making, and foster a culture of data-driven innovation, and to set a gold standard for process, collaboration, and communication.

OPPORTUNITY HIGHLIGHTS:
- Lead the identification of business data requirements, create data models, design processes that align to the business logic, and regularly communicate with business stakeholders to ensure delivery meets business needs.
- Design ETL processes, develop source-to-target mappings and integration workflows, and manage load processes to support regular and ad hoc activities, considering the needs of downstream systems, functions, and visualizations.
- Work with the latest open-source tools, libraries, platforms, and languages to build data products enabling other analysts to explore and interact with large and complex data sets.
- Build robust systems and reusable code modules to solve problems across the team and organization, with an eye on long-term maintenance and support of the application.
- Perform routine testing of your own and others' work to guarantee accurate, complete processes that support business needs.
- Maintain awareness of and comply with all organizational development standards, industry best practices, and business, security, privacy, and retention requirements.
- Routinely monitor performance, and diagnose and implement tuning/optimization strategies to guarantee a highly efficient data structure.
- Collaborate with other engineers through active participation in code reviews, and challenge the team to deliver with precision, consistency, and speed.
- Document data flows and technical designs to ensure compliance with organizational, business, and security best practices.
- Regularly monitor timelines and workload; ensure delivery promises are met or exceeded.
- Support the BI mission through learning new technologies and supporting other projects as needed.
- Provide code reviews and technical guidance to the team.
- Collaborate closely with the SA and TPO to gather requirements and develop enterprise solutions.

SKILLS & QUALIFICATIONS:
- Bachelor's degree in a quantitative field: engineering, finance, data science, statistics, economics, or similar.
- 5+ years of experience in the data engineering/data management space, working with enterprise-level production data warehouses.
- 5+ years of experience working with Azure Databricks (a PySpark sketch follows below).
- 5+ years of experience in SQL and PySpark.
- Ability to work in an Agile methodology environment.
- Experience and interest in cloud migration/journey to the cloud for data platforms and landscape.
- Strong business acumen, analytical skills, and technical abilities.
- Practical problem-solving skills and ability to move complex projects forward.
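A typical Databricks/PySpark batch step for a role like this might look like the following sketch. It assumes a Databricks workspace where `spark` is provided; the mount paths and column names are illustrative.

```python
# Sketch of a batch clean-and-write step on Databricks (spark is workspace-provided).
from pyspark.sql import functions as F

raw = spark.read.format("parquet").load("/mnt/raw/events/")  # hypothetical path

cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_date").isNotNull())
)

# Write as Delta, partitioned by date for downstream query performance.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .save("/mnt/curated/events/"))
```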
Posted 3 weeks ago
3.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Job Overview:
We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience in building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & Microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks.
- Data Engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System Architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with Cross-Functional Teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack App Development: Design and implement Slack integrations and custom apps as required for team productivity and automation.
- Code Quality: Ensure high-quality coding standards through rigorous testing, code reviews, and writing maintainable code.
- SQL Expertise: Write efficient and optimized SQL queries for data storage, retrieval, and analysis.
- Microservices Architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming Languages: Expert in Python, with solid experience building APIs and microservices.
- Web Frameworks & APIs: Strong hands-on experience with FastAPI and, optionally, Flask, designing RESTful APIs.
- Data Engineering Expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & Microservices Architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack App Development: Experience with integrating Slack apps or creating custom Slack workflows.
- Reusable Framework Development: Ability to design modular and configurable frameworks that can be reused across various teams and systems (see the framework sketch after this listing).
- Excellent Problem-Solving Skills: Ability to break down complex problems and deliver practical solutions.
- Software Development Experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us:
- Growth Opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative Culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive Compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative Projects: Be a part of projects that have a real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!
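"Reusable and configurable frameworks" can mean many things; one lightweight interpretation is a step registry where pipelines are declared as configuration. The sketch below is illustrative only; all names are invented for the example.

```python
# Sketch of a configurable, reusable pipeline-step registry (names illustrative).
from typing import Callable, Dict, List

STEPS: Dict[str, Callable[[list], list]] = {}

def step(name: str):
    """Register a transformation so pipelines can be declared in config."""
    def register(fn: Callable[[list], list]):
        STEPS[name] = fn
        return fn
    return register

@step("drop_nulls")
def drop_nulls(rows: list) -> list:
    return [r for r in rows if all(v is not None for v in r.values())]

@step("uppercase_names")
def uppercase_names(rows: list) -> list:
    return [{**r, "name": r["name"].upper()} for r in rows]

def run_pipeline(rows: list, config: List[str]) -> list:
    # The pipeline itself is just data: an ordered list of step names.
    for name in config:
        rows = STEPS[name](rows)
    return rows

print(run_pipeline([{"name": "ada"}, {"name": None}],
                   ["drop_nulls", "uppercase_names"]))  # [{'name': 'ADA'}]
```

The design choice: teams reuse the registry and ship only new step functions plus a config list, which keeps pipelines declarative and testable.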
Posted 3 weeks ago
6.0 - 11.0 years
6 - 10 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 6+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Hands-on exposure to Airflow is a must (an orchestration sketch follows below).
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
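Since Airflow exposure is called out explicitly, here is a minimal DAG sketch showing the orchestration shape. The DAG id, schedule, and task bodies are assumptions; a real load task would call into Snowflake via a provider operator or connector.

```python
# Hypothetical Airflow DAG: extract then load into Snowflake (names assumed).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extracting from source system")  # placeholder extraction step

def load_to_snowflake():
    print("loading staged files into Snowflake")  # placeholder load step

with DAG(
    dag_id="daily_snowflake_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    t_extract >> t_load  # load only runs after extract succeeds
```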
Posted 3 weeks ago
7.0 - 12.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Lead Python Developer
Experience: 7+ years
Location: Bangalore/Hyderabad

Job Overview:
We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience in building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & Microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks.
- Data Engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System Architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with Cross-Functional Teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack App Development: Design and implement Slack integrations and custom apps as required for team productivity and automation.
- Code Quality: Ensure high-quality coding standards through rigorous testing, code reviews, and writing maintainable code (a testing sketch follows below).
- SQL Expertise: Write efficient and optimized SQL queries for data storage, retrieval, and analysis.
- Microservices Architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming Languages: Expert in Python, with solid experience building APIs and microservices.
- Web Frameworks & APIs: Strong hands-on experience with FastAPI and, optionally, Flask, designing RESTful APIs.
- Data Engineering Expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & Microservices Architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack App Development: Experience with integrating Slack apps or creating custom Slack workflows.
- Reusable Framework Development: Ability to design modular and configurable frameworks that can be reused across various teams and systems.
- Excellent Problem-Solving Skills: Ability to break down complex problems and deliver practical solutions.
- Software Development Experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us:
- Growth Opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative Culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive Compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative Projects: Be a part of projects that have a real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!
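For the code-quality responsibility above, a short example of how API stability is typically guarded with tests: pytest plus FastAPI's TestClient. The /health endpoint is an assumption for illustration.

```python
# Illustrative API test sketch (pytest + FastAPI TestClient; endpoint assumed).
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

client = TestClient(app)

def test_health_returns_ok():
    # A fast, dependency-free check that the service contract holds.
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}
```

Run with `pytest` in the project root; such tests usually gate merges in the CI pipeline.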
Posted 3 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Bengaluru
Work from Office
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- Minimum 4 years of relevant experience.
- Proficient in Python, with hands-on experience building ETL pipelines for data extraction, transformation, and validation (a small sketch follows below).
- Strong SQL skills for working with structured data.
- Familiar with Grafana or Kibana for data visualization, monitoring, and dashboards.
- Experience with databases such as MongoDB, Elasticsearch, and MySQL.
- Comfortable working in Linux environments using common Unix tools.
- Hands-on experience with Git, Docker, and virtual machines.
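A compact extract-validate-load sketch in the spirit of this stack follows. Connection details, table, and index names are hypothetical; it assumes the mysql-connector-python and elasticsearch packages.

```python
# Sketch: pull rows from MySQL, validate, bulk-load into Elasticsearch.
import mysql.connector
from elasticsearch import Elasticsearch, helpers

db = mysql.connector.connect(host="localhost", user="etl",
                             password="...", database="app")  # assumed creds
cur = db.cursor(dictionary=True)
cur.execute("SELECT id, name, created_at FROM users")
rows = cur.fetchall()

# Validate before loading: skip rows missing required fields.
valid = [r for r in rows if r["id"] is not None and r["name"]]

es = Elasticsearch("http://localhost:9200")
actions = ({"_index": "users", "_id": r["id"], "_source": r} for r in valid)
helpers.bulk(es, actions)  # bulk API amortizes round trips
```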
Posted 3 weeks ago
1.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
- Build data integrations and data models to support the analytical needs of this project.
- Translate business requirements into technical requirements as needed.
- Design and develop automated scripts for data pipelines to process and transform data as per the requirements, and monitor them.
- Produce artifacts such as data flow diagrams, designs, and data models, along with Git-versioned code, as deliverables.
- Use tools and programming languages such as SQL, Python, Snowflake, Airflow, dbt, and Salesforce Data Cloud.
- Ensure data accuracy, timeliness, and reliability throughout the pipeline.
- Complete QA and data profiling to ensure data is ready for UAT as per the requirements (a profiling sketch follows below).
- Collaborate with business stakeholders and the visualization team, and support enhancements.
- Provide timely updates on sprint boards and tasks, and support the team lead with timely updates on all projects.
- Project experience with version control systems and CI/CD tools such as Git, GitFlow, Bitbucket, and Jenkins.
- Participate in UAT to resolve findings and plan Go-Live/production deployment.

Milestones:
- Data integration plan into Data Cloud for structured and unstructured data/RAG needs for the Sales AI use cases.
- Design data models and the semantic layer on Salesforce AI.
- Agentforce prompt integration; write Agentforce prompts and refine as needed.
- Data quality and sourcing enhancements.
- Assist decision scientists on data needs.
- Collaborate with the EA team and participate in design reviews.
- Performance tuning and optimization of data pipelines.
- Hypercare after deployment.
- Project review and knowledge transfer.
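For the QA/data-profiling step before UAT, a minimal pandas sketch of the idea follows; the file, key column, and thresholds are illustrative assumptions.

```python
# Minimal data-profiling gate before handing a dataset to UAT (names assumed).
import pandas as pd

df = pd.read_csv("extract.csv")  # hypothetical pipeline output

profile = {
    "row_count": len(df),
    "duplicate_keys": int(df["account_id"].duplicated().sum()),
    "null_rate": df.isna().mean().round(3).to_dict(),
}
print(profile)

# Fail fast if quality gates are not met.
assert profile["duplicate_keys"] == 0, "Duplicate account_id values found"
assert profile["null_rate"].get("account_id", 0) == 0, "Null keys found"
```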
Posted 3 weeks ago
5.0 - 8.0 years
15 - 30 Lacs
Pune
Hybrid
Skills: Data Engineer, Azure Data Factory (ADF), SQL, Power BI, SSRS, SSIS, SSAS, ETL, Databricks, Data Integration, Data Model
Posted 3 weeks ago
3.0 - 5.0 years
3 - 8 Lacs
Indore, Dewas, Pune
Work from Office
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines for data ingestion and processing.
- Work with stakeholders to understand data requirements and translate them into technical solutions.
- Ensure data quality, reliability, and governance.
- Optimize data storage and retrieval for performance and cost efficiency.
- Collaborate with data scientists, analysts, and developers to support their data needs.
- Maintain and enhance data architecture to support business growth.

Required Skills:
- Strong experience with SQL and relational databases (MySQL, PostgreSQL, etc.).
- Hands-on experience with big data technologies (Spark, Hadoop, Hive, etc.).
- Proficiency in Python/Scala/Java for data engineering tasks.
- Experience with cloud platforms (AWS, Azure, or GCP).
- Familiarity with data warehouse solutions (Redshift, Snowflake, BigQuery, etc.).
- Knowledge of workflow orchestration tools (Airflow, Luigi, etc.).

Good to Have:
- Experience with real-time data streaming (Kafka, Flink, etc.); a consumer sketch follows below.
- Understanding of CI/CD and DevOps practices for data workflows.
- Exposure to data security, compliance, and data privacy practices.

Qualifications:
- Bachelor's/Master's degree in Computer Science, IT, or a related field.
- Minimum 3 years of experience in data engineering or a related field.
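For the real-time streaming item above, a hedged sketch of Kafka consumption with the kafka-python library; the topic, broker address, and group id are assumptions.

```python
# Illustrative real-time ingestion loop with kafka-python (topic/broker assumed).
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="etl-consumers",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    order = message.value
    # In a real pipeline this would land in a staging table or data lake.
    print(f"partition={message.partition} offset={message.offset} "
          f"id={order.get('id')}")
```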
Posted 3 weeks ago
8.0 - 11.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Data Engineering Associate Advisor - HIH - Evernorth

Position Summary:
The Data Engineering Advisor demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and a product mindset. This individual will act as a key contributor on the team to design, build, test, and deliver large-scale software applications, systems, platforms, services, or technologies in the data engineering space. They will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery. The candidate will play a key role in automating processes on Databricks and AWS, collaborating with business and technology partners to gather requirements, develop, and implement. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The team demands an innovation-driven, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence. The applicant will work with internal and external stakeholders and customers to build solutions as part of Enterprise Data Engineering, and will need to demonstrate very strong technical and communication skills.

- Delivery: Intermediate delivery skills, including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small batch releases, and contribute to tradeoff and negotiation discussions.
- Domain Expertise: Demonstrated track record of domain expertise, including the ability to understand the technical concepts necessary to do the job effectively, demonstrate willingness, cooperation, and concern for business issues, and possess in-depth knowledge of the immediate systems worked on.
- Problem Solving: Proven problem-solving skills, including debugging skills that allow you to determine the source of issues in unfamiliar code or systems, the ability to recognize and solve repetitive problems rather than working around them, to recognize mistakes and use them as learning opportunities, and to break down large problems into smaller, more manageable ones.

Responsibilities:
- Deliver business needs end to end, from requirements through development into production.
- Through a hands-on engineering approach in the Databricks environment, deliver data engineering toolchains, platform capabilities, and reusable patterns.
- Follow software engineering best practices with an automation-first approach and a continuous learning and improvement mindset.
- Ensure adherence to enterprise architecture direction and architectural standards.
- Collaborate in a high-performing team environment, with the ability to influence and be influenced by others.

Experience Required:
- 8 to 11 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation.
- More than 3 years of experience in Databricks within an AWS environment.
- Data engineering experience.

Experience Desired:
- Expertise in Agile software development principles and patterns.
- Expertise in building streaming, batch, and event-driven architectures and data pipelines (a streaming sketch follows below).

Primary Skills:
- Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue.
- Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation.
- Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry.
- Experience with multi-cloud software-as-a-service products such as Databricks and Snowflake.
- Experience with Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation.
- Experience with messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS.
- Experience with API and microservices stacks such as Spring Boot and Quarkus.
- Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront.
- Experience with one or more of the following programming and scripting languages (Python, Scala, JVM-based languages, or JavaScript) and the ability to pick up new languages.
- Experience in building CI/CD pipelines using Jenkins and GitHub Actions.
- Strong expertise with source code management and its best practices.
- Proficient in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD).
- Knowledge of the Behavior-Driven Development (BDD) approach.

Additional Skills:
- Cloud-based security principles and protocols such as OAuth2, JWT, data encryption, hashing, secret management, etc.
- Ability to perform detailed analysis of business problems and technical environments.
- Strong oral and written communication skills.
- Ability to think strategically, implement iteratively, and estimate the financial impact of design/architecture alternatives.
- Continuous focus on ongoing learning and development.

About Evernorth Health Services:
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
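For the streaming/event-driven pattern named above, a hedged Spark Structured Streaming sketch reading from Kafka follows. Topic, schema, broker, and paths are assumptions; running it requires the Spark-Kafka connector package and, for the Delta sink, a Delta-enabled cluster such as Databricks.

```python
# Sketch: Kafka -> parse JSON -> Delta sink via Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("claims-stream").getOrCreate()

schema = StructType([StructField("claim_id", StringType()),
                     StructField("status", StringType())])

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
          .option("subscribe", "claims")                     # assumed topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream.format("delta")
         .option("checkpointLocation", "/chk/claims")  # enables exactly-once recovery
         .start("/curated/claims"))
query.awaitTermination()
```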
Posted 3 weeks ago
8.0 - 13.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Location: Bengaluru

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in data engineering, with a strong focus on Databricks, Python, and SQL. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to support various business needs.

Key Responsibilities:
- Develop and implement efficient data pipelines and ETL processes to migrate and manage client, investment, and accounting data in Databricks.
- Work closely with the investment management team to understand data structures and business requirements, ensuring data accuracy and quality.
- Monitor and troubleshoot data pipelines, ensuring high availability and reliability of data systems.
- Optimize database performance by designing scalable and cost-effective solutions.

What's on offer:
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- A collaborative and inclusive work environment.
- The chance to work on impactful projects with a talented team.

Candidate Profile:
- Experience: 8+ years of experience in data engineering or a similar role.
- Proficiency in Apache Spark and the Databricks platform, including schema design, data partitioning, and query optimization.
- Exposure to Azure.
- Exposure to streaming technologies (e.g., Auto Loader, DLT streaming); a minimal Auto Loader sketch follows below.
- Advanced SQL and data modeling skills, plus data warehousing concepts tailored to investment management data (e.g., transaction, accounting, portfolio, and reference data).
- Experience with ETL/ELT tools such as SnapLogic and programming languages (e.g., Python, Scala, R).
- Familiarity with workload automation and job scheduling tools such as Control-M.
- Familiarity with data governance frameworks and security protocols.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Education:
- Bachelor's degree in Computer Science, IT, or a related discipline.
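Auto Loader is Databricks' incremental file-ingestion source (the `cloudFiles` stream format). A minimal sketch follows, assuming a Databricks workspace where `spark` is provided; the mount paths and file format are illustrative.

```python
# Databricks Auto Loader sketch: incrementally ingest landed CSV files as Delta.
stream = (spark.readStream.format("cloudFiles")
          .option("cloudFiles.format", "csv")
          .option("cloudFiles.schemaLocation", "/mnt/schemas/trades")  # schema tracking
          .load("/mnt/landing/trades/"))  # hypothetical landing zone

(stream.writeStream.format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/trades")
       .outputMode("append")
       .start("/mnt/bronze/trades/"))  # bronze layer of a medallion layout
```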
Posted 3 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
Hyderabad
Work from Office
What you will do:
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a crucial team member assisting in the design and development of the data pipeline.
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a data-quality sketch follows below).
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with data architects, business SMEs, and data scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience.

Preferred Qualifications:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Solid understanding of data governance frameworks, tools, and best practices.
- Knowledge of data protection regulations and compliance requirements.

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Good understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments).

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Good communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.
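A hedged PySpark data-quality gate of the kind implied above; the dataset path, key column, and failure policy are illustrative assumptions.

```python
# Illustrative PySpark data-quality gate before a downstream load (names assumed).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()
df = spark.read.parquet("/data/curated/patients/")  # hypothetical path

total = df.count()
null_ids = df.filter(F.col("patient_id").isNull()).count()
dupes = total - df.dropDuplicates(["patient_id"]).count()

print(f"rows={total} null_ids={null_ids} duplicates={dupes}")
if null_ids or dupes:
    # Halting here keeps bad records out of downstream systems.
    raise ValueError("Data-quality gate failed; halting downstream load")
```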
Posted 3 weeks ago
9.0 - 13.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Role Description:
We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.

Roles & Responsibilities:
- Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems.
- Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets.
- Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments.
- Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements (a freshness-check sketch follows below).
- Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows.
- Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls.
- Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis.
- Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives.
- Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes.
- Mentor and develop a high-performing team of data operations analysts and leads.

Functional Skills:

Must-Have Skills:
- Experience managing a team of data engineers in biotech/pharma domain companies.
- Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems.
- Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
- Experience managing data workflows in cloud environments such as AWS, Azure, or GCP.
- Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions.
- Working knowledge of SQL, Python, or scripting languages for process monitoring and automation.
- Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization.
- Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX).
- Excellent leadership, communication, and stakeholder engagement skills.
- Well versed in full-stack development and DataOps automation, logging frameworks, and pipeline orchestration tools.
- Strong analytical and problem-solving skills to address complex data challenges.
- Effective communication and interpersonal skills to collaborate with cross-functional teams.

Good-to-Have Skills:
- Data engineering management experience in biotech/life sciences/pharma.
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph.

Education and Professional Certifications:
- Any degree and 9-13 years of experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
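For the SLA-tracking responsibility above, a small sketch of the kind of freshness check such a team would automate. The table, column, storage (SQLite stands in for any SQL source), ISO-formatted timestamps, and the four-hour threshold are all illustrative assumptions.

```python
# Sketch of an SLA/freshness check for a data pipeline (all names assumed).
from datetime import datetime, timedelta, timezone
import sqlite3  # stand-in for any SQL source

conn = sqlite3.connect("ops.db")
(latest,) = conn.execute("SELECT MAX(loaded_at) FROM pipeline_runs").fetchone()

# Assumes loaded_at is stored as an ISO-8601 string in UTC.
latest_ts = datetime.fromisoformat(latest).replace(tzinfo=timezone.utc)
age = datetime.now(timezone.utc) - latest_ts

if age > timedelta(hours=4):
    # In production this would page on-call or open an incident.
    print(f"SLA BREACH: last load {age} ago")
else:
    print(f"OK: last load {age} ago")
```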
Posted 3 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Chennai
Hybrid
Join our Mega Tech Recruitment Drive at TaskUs Chennai, where bold ideas, real impact, and ridiculous innovation come together.

Who are we hiring for? Developers, Senior Developers, Leads, Architects, and more.

When is it happening? 24th June 2025, 9 AM to 4 PM IST.

Which skills are we hiring for?
- .NET Full Stack: AWS/Azure + Angular/React/Vue.js
- Oracle Fusion: Functional Finance (AP, AR, GL, CM, and Tax)
- Senior Data Engineer: Tableau dashboards / QlikView / Power BI, Azure Databricks, PySpark, Databricks SQL, JupyterHub/PyCharm
- SQL Server Database Administrator: SQL Server administration (both cloud and on-prem)
- Workday Integration Developer: Workday integration tools (Studio, EIB), Workday Matrix, XML, XSLT
- Workday Configuration Lead Developer: Workday configuration tools (Studio, EIB), Workday Matrix, XML, XSLT, XPath, and report types (Simple, Matrix, Composite, Advanced)

About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, including the Philippines, India, and the United States.

What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.
Posted 3 weeks ago
5.0 - 10.0 years
35 - 40 Lacs
Pune
Work from Office
Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), AVP
Location: Pune, India

Role Description:
The Engineer is responsible for developing and delivering elements of engineering solutions to accomplish business goals. Awareness of the bank's important engineering principles is expected. Root-cause-analysis skills are developed through addressing enhancements and fixes to products; engineers build reliability and resiliency into solutions through early testing, peer reviews, and automating the delivery lifecycle. The successful candidate should be able to work independently on medium to large sized projects with strict deadlines, and should be able to work in a cross-application, mixed technical environment, demonstrating a solid hands-on development track record while working in an agile methodology. The role demands working alongside a geographically dispersed team. The position is required as part of the build-out of the compliance tech internal development team in India. The overall team will primarily deliver improvements in compliance tech capabilities that are major components of the regular regulatory portfolio, addressing various regulatory commitments, from common commitments to mandated monitors.

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for those 35 years and above

Your key responsibilities:
- Analyze data sets; design and code stable and scalable data ingestion workflows, integrating them into existing workflows.
- Work with team members and stakeholders to clarify requirements and provide the appropriate ETL solution.
- Work as a senior developer developing analytics algorithms on top of ingested data.
- Work as a senior developer on various data sourcing efforts in Hadoop and GCP.
- Ensure new code is tested at both unit and system level; design, develop, and peer-review new code and functionality.
- Operate as a member of an agile scrum team.
- Apply root-cause-analysis skills to identify bugs and issues behind failures.
- Support production support and release management teams in their tasks.

Your skills and experience:
- 6+ years of coding experience in reputed organizations.
- Hands-on experience with Bitbucket and CI/CD pipelines.
- Proficient in Hadoop, Python, Spark, SQL, Unix, and Hive.
- Basic understanding of on-prem and GCP data security.
- Hands-on development experience on large ETL/big data systems; GCP is a big plus.
- Hands-on experience with Cloud Build, Artifact Registry, Cloud DNS, Cloud Load Balancing, etc.
- Hands-on experience with Dataflow, Cloud Composer, Cloud Storage, Dataproc, etc. (a BigQuery sketch follows below).
- Basic understanding of data quality dimensions such as consistency, completeness, accuracy, and lineage.
- Hands-on business and systems knowledge gained in a regulatory delivery environment.
- Banking experience, with regulatory and cross-product knowledge.
- Passionate about test-driven development.
- Prior experience with release management tasks and responsibilities.
- Data visualization experience in Tableau is good to have.

How we'll support you:
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.
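To ground the GCP stack above, a hedged sketch using the google-cloud-bigquery client; the project id, dataset, and query are illustrative, not from the posting.

```python
# Illustrative BigQuery query via the official Python client (names assumed).
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # assumed project id

query = """
    SELECT trade_date, COUNT(*) AS trades
    FROM `my-gcp-project.compliance.trades`
    GROUP BY trade_date
    ORDER BY trade_date DESC
    LIMIT 7
"""
# result() blocks until the job completes, then streams rows.
for row in client.query(query).result():
    print(row.trade_date, row.trades)
```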
Posted 3 weeks ago
6.0 - 11.0 years
19 - 25 Lacs
Bengaluru
Work from Office
About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Working at Target means the opportunity to help all families discover the joy of everyday life. Caring for our communities is woven into who we are, and we invest in the places we collectively live, work and play. We prioritize relationships, fuel and develop talent by creating growth opportunities, and succeed as one Target team. At our core, our purpose is ingrained in who we are, what we value, and how we work. It's how we care, grow, and win together.

Team overview:
The data platform at Target is used by 5,000+ Target team members and Target's vast network of vendors to easily turn our data into a strategic advantage. Through the full end-to-end supply chain of data, from source to dashboard, we ensure each technical and non-technical person has the right tools to access, use, and communicate with data. Product teams at Target Corporation are accountable for the delivery of business outcomes enabled through technology and analytic products that are easy to use, easily maintained and highly reliable. Product teams have one shared backlog that is inclusive of all product, technology and design work.

Role overview:
As a Sr Product Manager focused on the Data Platform portfolio, you will be responsible for understanding the needs of our data engineering, platform engineering and ML engineering teams. You will partner with your platform engineering teams and other PMs to build the infrastructure and the tools for a scalable, performant and reliable data platform that can power all the analytical, data science, AI and ML use cases for Target. As a Product Manager for the data platform, you will lead the efforts to identify the capabilities and features needed in the modern data platform, understand the consumption patterns for the data across all personas, and determine how these capabilities can be delivered to solve the problems of the users. You will set the tone, vision, strategy, OKRs and prioritization for this capability. You will be the voice of the customer with your product team and stakeholders to ensure that their needs are met, and you will be responsible for maintaining and refining the product backlog (creating user stories and acceptance criteria) while prioritizing it to focus on the highest-impact work for your team and stakeholders. You will encourage the open exchange of information and viewpoints, inspire others to achieve challenging goals and high standards of performance, and commit to the organization's direction. You will foster a sense of urgency to achieve goals and leverage resources to overcome unexpected obstacles. Core responsibilities of this job are described within this job description; job duties may change at any time due to business needs.

About you:
- Four-year degree in Computer Science/Engineering or equivalent experience.
- 6+ years of product management experience in data platforms and developer-focused products.
- Strongly preferred: more than 3 years of experience as a data engineer or data platform engineer.
- Strong understanding of big data and cloud technologies, including the compute, storage and query layers.
- Strong ability to influence others without direct authority.
- Strong ability to identify and build great relationships with key users, leaders, and engineering teams.
- Strong ability to work in an agile, collaborative and matrixed environment.
- Proactive communication, both verbal and written, is a must.
- Proven track record of product leadership.
- Understanding of the product lifecycle and product startups is a plus.

Useful links:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Posted 3 weeks ago
4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII:
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Team Overview:
Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation and continuous learning. At Target, we are gearing up for exponential growth and continuously expanding our guest experience. To support this expansion, Data Engineering is building robust warehouses and enhancing existing datasets to meet business needs across the enterprise. We are looking for talented individuals who are passionate about innovative technology and data warehousing and are eager to contribute to data engineering.

Position Overview:
- Assess client needs and convert business requirements into a business intelligence (BI) solutions roadmap relating to complex issues involving long-term or multi-work streams.
- Analyze technical issues and questions, identifying data needs and delivery mechanisms.
- Implement data structures using best practices in data modeling, ETL/ELT processes, Spark, Scala, SQL, database, and OLAP technologies.
- Manage the overall development cycle, driving best practices and ensuring development of high-quality code for common assets and framework components.
- Develop test-driven solutions, provide technical guidance, and contribute heavily to a team of high-caliber data engineers by developing test-driven solutions and BI applications that can be deployed quickly and in an automated fashion.
- Manage and execute against agile plans and set deadlines based on client, business, and technical requirements.
- Drive resolution of technology roadblocks including code, infrastructure, build, deployment, and operations.
- Ensure all code adheres to development and security standards.

About you:
- Four-year degree or equivalent experience.
- 5+ years of software development experience, preferably in data engineering/Hadoop development (Hive, Spark, etc.).
- Hands-on experience in object-oriented or functional programming such as Scala/Java/Python.
- Knowledge of or experience with a variety of database technologies (Postgres, Cassandra, SQL Server).
- Knowledge of data integration design using API and streaming technologies (Kafka; a producer sketch follows below) as well as ETL and other data integration patterns.
- Experience with cloud platforms like Google Cloud, AWS, or Azure. Hands-on experience with BigQuery is an added advantage.
- Good understanding of distributed storage (HDFS, Google Cloud Storage, Amazon S3) and processing (Spark, Google Dataproc, Amazon EMR, or Databricks).
- Experience with a CI/CD toolchain (Drone, Jenkins, Vela, Kubernetes) is a plus.
- Familiarity with data warehousing concepts and technologies.
- Maintains technical knowledge within areas of expertise.
- A constant learner and team player who enjoys solving tech challenges with a global team.
- Hands-on experience building complex data pipelines and flow optimizations.
- Able to understand the data, draw insights, make recommendations, and identify any data quality issues upfront.
- Experience with test-driven development and software test automation.
- Follows best coding practices and engineering guidelines as prescribed.
- Strong written and verbal communication skills, with the ability to present complex technical information clearly and concisely to a variety of audiences.

Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
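To complement the Kafka consumer pattern shown for an earlier listing, here is the producing side, sketched with kafka-python; the topic, broker, and payload fields are assumptions for illustration.

```python
# Illustrative kafka-python producer publishing integration events (names assumed).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"sku": "TGT-1001", "qty": 3, "store": 1375}  # hypothetical payload
producer.send("inventory-events", value=event)
producer.flush()  # block until the broker acknowledges the buffered batch
```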
Posted 3 weeks ago
4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII:
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Pyramid overview:
A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security and Personalization rely on.

Position Overview:
As a Senior Engineer on the Search team, you serve as a specialist in the engineering team that supports the product. You help develop and gain insight into the application architecture. You can distill an abstract architecture into a concrete design and influence the implementation. You show expertise in applying the appropriate software engineering patterns to build robust and scalable systems. You are an expert in programming and apply your skills in developing the product. You have the skills to design and implement the architecture on your own, but choose to influence your fellow engineers by proposing software designs and providing feedback on software designs and/or implementations. You leverage data science in solving complex business problems and make decisions based on data. You show good problem-solving skills and can help the team triage operational issues, leveraging your expertise to eliminate repeat occurrences.

About You:
- 4-year degree in a quantitative discipline (Science, Technology, Engineering, Mathematics) or equivalent experience.
- Experience with search engines like Solr and Elasticsearch (a query sketch follows below).
- Strong hands-on programming skills in Java, Kotlin, Micronaut, and Python.
- Experience with PySpark, SQL, and Hadoop/Hive is an added advantage.
- Experience with streaming systems like Kafka; experience with Kafka Streams is an added advantage.
- Experience in MLOps is an added advantage.
- Experience in data engineering is an added advantage.
- Strong analytical thinking skills with an ability to creatively solve business problems, innovating new approaches where required.
- Able to produce reasonable documents/narratives suggesting actionable insights.
- Self-driven and results-oriented.
- Strong team player with the ability to collaborate effectively across geographies/time zones.

Know more about us here:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
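For the search-engine requirement, a minimal relevance-query sketch against Elasticsearch (the posting also names Solr); the index name, fields, and boost are assumptions for illustration.

```python
# Illustrative relevance query with the Elasticsearch 8.x Python client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="products",  # hypothetical index
    query={
        "multi_match": {
            "query": "red running shoes",
            "fields": ["title^3", "description"],  # boost title matches 3x
        }
    },
    size=5,
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```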
Posted 3 weeks ago
3.0 - 5.0 years
10 - 14 Lacs
Pune
Work from Office
Job Title: GCP Data Engineer, AS
Location: Pune, India
Corporate Title: Associate
Role Description
An Engineer is responsible for designing and developing entire engineering solutions to accomplish business goals. Key responsibilities of this role include ensuring that solutions are well architected, with maintainability and ease of testing built in from the outset, and that they can be integrated successfully into the end-to-end business process flow. They will have gained significant experience through multiple implementations and have begun to develop both depth and breadth in several engineering competencies. They have extensive knowledge of design and architectural patterns. They will provide engineering thought leadership within their teams and will play a role in mentoring and coaching less experienced engineers.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 yrs. and above
Your key responsibilities
- Design, develop, and maintain data pipelines using Python and SQL on GCP.
- Apply Agile methodologies and ETL, ELT, data movement, and data processing skills.
- Work with Cloud Composer to manage and process batch data jobs efficiently.
- Develop and optimize complex SQL queries for data analysis, extraction, and transformation.
- Develop and deploy Google Cloud services using Terraform.
- Implement CI/CD pipelines using GitHub Actions.
- Consume and host REST APIs using Python.
- Monitor and troubleshoot data pipelines, resolving any issues in a timely manner.
- Ensure team collaboration using Jira, Confluence, and other tools.
- Quickly learn new and existing technologies; bring strong problem-solving skills.
- Write advanced SQL and Python scripts.
- Certification as a Google Cloud Professional Data Engineer is an added advantage.
Your skills and experience
- 6+ years of IT experience as a hands-on technologist.
- Proficient in Python for data engineering; proficient in SQL.
- Hands-on experience with GCP Cloud Composer, Dataflow, BigQuery, Cloud Functions, and Cloud Run; GKE is good to have.
- Hands-on experience in hosting and consuming REST APIs.
- Proficient in Terraform (HashiCorp).
- Experienced with GitHub, GitHub Actions, and CI/CD.
- Experience in automating ETL testing using Python and SQL.
- API knowledge and Bitbucket experience are good to have.
How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
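Since this role centers on batch pipelines in Cloud Composer (managed Airflow), here is a minimal sketch of the kind of DAG it describes, using the Airflow 2.x TaskFlow API. The task bodies, schedule, and filtering rule are illustrative assumptions; a real pipeline would pull from an actual source and write to BigQuery.

```python
# A minimal sketch of a batch ETL DAG for Cloud Composer / Airflow 2.x.
# All task logic, the daily schedule, and the data shape are assumptions.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def sample_batch_pipeline():
    @task
    def extract() -> list[dict]:
        # Placeholder for pulling rows from a source system.
        return [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Illustrative business rule: keep only large transactions.
        return [r for r in rows if r["amount"] > 150]

    @task
    def load(rows: list[dict]) -> None:
        # In a real pipeline this might insert into a BigQuery table.
        print(f"Loading {len(rows)} rows")

    load(transform(extract()))

sample_batch_pipeline()
```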
Posted 3 weeks ago
3.0 - 5.0 years
32 - 37 Lacs
Mumbai
Work from Office
Job Title: Lead Business Analyst, AVP
Location: Mumbai, India
Role Description
As a BA you are expected to design and deliver critical senior management dashboards and analytics using tools such as Tableau and Power BI. These management packs should enable management to make timely decisions for their respective businesses and create a sound foundation for analytics. You will need to collaborate closely with senior business managers, data engineers, and stakeholders from other teams to understand requirements and translate them into visually pleasing dashboards and reports. You will play a crucial role in analyzing business data and generating valuable insights for strategic ad hoc exercises.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 yrs. and above
Your key responsibilities
- Collaborate with business users and managers to gather requirements and understand business needs in order to design optimal solutions.
- Perform ad hoc data analysis as per business needs to generate reports, visualizations, and presentations that support strategic decision-making.
- Source information from multiple systems and build a robust data pipeline model; work on large and complex data sets to produce useful insights.
- Perform audit checks ensuring integrity and accuracy across all spectrums before implementing findings.
- Ensure timely refreshes so dashboards and reports always present the most up-to-date information.
- Identify opportunities for process improvements and optimization based on data insights.
- Communicate project status updates and recommendations.
Your skills and experience
- Bachelor's degree in Computer Science, IT, Business Administration, or a related field
- Minimum of 5 years of experience in visual reporting development, including hands-on development of analytics dashboards and working with complex data sets
- Minimum of 3 years of experience with Tableau, Power BI, or another BI tool
- Excellent Microsoft Office skills, including advanced Excel
- Comprehensive understanding of data visualization best practices
- Experience with data analysis, modeling, and ETL processes is advantageous
- Excellent knowledge of database concepts and extensive hands-on experience working with SQL
- Strong analytical, quantitative, problem-solving, and organizational skills
- Attention to detail and ability to coordinate multiple tasks, set priorities, and meet deadlines
- Excellent communication and writing skills
How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
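For the ad hoc analysis side of this role, here is a minimal sketch of a dashboard-ready aggregation in pandas. The input file and the `region`/`product`/`revenue` columns are illustrative assumptions; in practice the source would more likely be a SQL query against a warehouse.

```python
# A minimal sketch of ad hoc analysis feeding a BI dashboard.
# The CSV file and column names are assumptions for illustration.
import pandas as pd

# Assumed input: one row per transaction with region, product, revenue.
df = pd.read_csv("transactions.csv")

# Pivot revenue by region and product into a dashboard-ready summary.
summary = df.pivot_table(
    index="region",
    columns="product",
    values="revenue",
    aggfunc="sum",
    fill_value=0,
)
print(summary)
```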
Posted 3 weeks ago
4.0 - 6.0 years
12 - 16 Lacs
Chennai
Work from Office
Job Information
Job Opening ID: ZR_2441_JOB
Date Opened: 21/03/2025
Industry: IT Services
Job Type:
Work Experience: 4-6 years
Job Title: Data Engineer with Gen AI
City: Chennai
Province: Tamil Nadu
Country: India
Postal Code: 600001
Number of Positions: 1
We are seeking a skilled Data Engineer who can function as a Data Architect, designing scalable data pipelines, table structures, and ETL workflows. The ideal candidate will recommend cost-effective, high-performance data architecture solutions and collaborate with cross-functional teams to enable efficient analytics and data science initiatives.
Key Responsibilities:
- Design and implement ETL workflows, data pipelines, and table structures to support business analytics and data science.
- Optimize data storage, retrieval, and processing for cost-efficiency and high performance.
- Collaborate with Analytics and Data Science teams on feature engineering and KPI computations.
- Develop and maintain data models for structured and unstructured data.
- Ensure data quality, integrity, and security across systems.
- Work with cloud platforms (AWS/Azure/GCP) to design and manage scalable data architectures.
Technical Skills Required:
- SQL & Python: strong proficiency in writing optimized queries and scripts.
- PySpark: hands-on experience with distributed data processing.
- Cloud technologies (AWS/Azure/GCP): experience with cloud-based data solutions.
- Spark & Airflow: experience with big data frameworks and workflow orchestration.
- Gen AI (preferred): exposure to generative AI applications is a plus.
Preferred Qualifications:
- Experience in data modeling, ETL optimization, and performance tuning.
- Strong problem-solving skills and ability to work in a fast-paced environment.
- Prior experience working with large-scale data processing.
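Here is a minimal sketch of the kind of PySpark ETL workflow this posting describes: read raw data, apply a transform, and write a partitioned output. The bucket paths, column names, and filter rule are illustrative assumptions.

```python
# A minimal PySpark ETL sketch. Paths, columns, and the business rule
# are assumptions for illustration, not a specific employer's pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sample_etl").getOrCreate()

# Assumed input: raw order events stored as Parquet.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Illustrative transform: keep completed orders, derive a date column,
# and aggregate revenue per day.
daily = (
    orders.filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Partition by date so downstream queries can prune efficiently.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```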
Posted 3 weeks ago
7.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information
Job Opening ID: ZR_2162_JOB
Date Opened: 15/03/2024
Industry: Technology
Job Type:
Work Experience: 7-9 years
Job Title: Sr Data Engineer
City: Bangalore
Province: Karnataka
Country: India
Postal Code: 560004
Number of Positions: 5
Mandatory Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark
Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, and Kafka.
- Experience in cloud computing, e.g., AWS, GCP, Azure, etc.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Strong skills in building positive relationships across Product and Engineering.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Experience creating and configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modeling, governance, and data architecture.
- Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components).
- Experience working in Agile and Scrum development processes.
- Experience with EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake.
- Experience architecting data products on streaming, serverless, and microservices architectures and platforms.
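As a sketch of the streaming side of this role, here is a minimal Spark Structured Streaming job that reads JSON events from a Kafka topic and prints them to the console. The broker address, topic name, and event schema are illustrative assumptions, and running it requires the spark-sql-kafka connector package.

```python
# A minimal Kafka -> Spark Structured Streaming sketch. Broker, topic,
# and schema are assumptions; requires the spark-sql-kafka connector.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("sample_stream").getOrCreate()

# Assumed shape of each JSON event on the topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read raw records from an assumed local Kafka broker.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into columns.
events = raw.select(
    F.from_json(F.col("value").cast("string"), schema).alias("e")
).select("e.*")

# For illustration, stream parsed events to the console sink.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```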
Posted 3 weeks ago
6.0 - 10.0 years
3 - 7 Lacs
Chennai
Work from Office
Job Information
Job Opening ID: ZR_2199_JOB
Date Opened: 15/04/2024
Industry: Technology
Job Type:
Work Experience: 6-10 years
Job Title: Sr Data Engineer
City: Chennai
Province: Tamil Nadu
Country: India
Postal Code: 600004
Number of Positions: 4
- Strong experience in Python
- Good experience in Databricks
- Experience working on the AWS/Azure cloud platforms
- Experience working with REST APIs and services, and messaging and event technologies
- Experience with ETL or building data pipeline tools
- Experience with streaming platforms such as Kafka
- Demonstrated experience working with large and complex data sets
- Ability to document data pipeline architecture and design
- Experience in Airflow is nice to have
- Able to build complex Delta Lake pipelines
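For the Delta Lake requirement, here is a minimal sketch of an upsert (merge) into a Delta table using the delta-spark package. The table path, merge key, and incoming rows are illustrative assumptions, and it presumes a Spark session configured with the Delta extensions and an existing table at that path.

```python
# A minimal Delta Lake merge (upsert) sketch using delta-spark.
# Table path, key column, and sample rows are assumptions; the Spark
# session is presumed to be configured with Delta Lake extensions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sample_delta").getOrCreate()

# Assumed incoming batch of updated customer rows.
updates = spark.createDataFrame(
    [(1, "alice@example.com"), (2, "bob@example.com")],
    ["customer_id", "email"],
)

# Assumed existing Delta table of customers.
target = DeltaTable.forPath(spark, "/tmp/delta/customers")

# Merge: update matching customers, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```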
Posted 3 weeks ago