6.0 - 11.0 years
25 - 35 Lacs
Valsad
Remote
Job Timing:
Monday-Friday: 3:00 PM to 12:00 AM (8:00 PM to 9:00 PM dinner break)
Saturday: 9:30 AM to 2:30 PM (1:00 PM to 1:30 PM lunch break)
Job Description: As a Data Scientist specializing in AI and Machine Learning, you will play a key role in developing and deploying state-of-the-art machine learning models. You will work closely with cross-functional teams to create solutions leveraging AI technologies, including OpenAI models, Google Gemini, Copilot, and other cutting-edge AI tools.
Key Responsibilities:
- Design, develop, and implement advanced AI and machine learning models focusing on generative AI and NLP technologies.
- Work with large datasets, applying statistical and machine learning techniques to extract insights and develop predictive models.
- Collaborate with engineering teams to integrate models into production systems.
- Apply best practices for model training, tuning, evaluation, and optimization (a minimal sketch follows the qualifications below).
- Develop and maintain pipelines for data ingestion, feature engineering, and model deployment.
- Leverage tools like OpenAI's GPT models, Google Gemini, Microsoft Copilot, and other available platforms for AI-driven solutions.
- Build and experiment with large language models, recommendation systems, computer vision models, and reinforcement learning systems.
- Continuously stay up to date with the latest AI/ML technologies and research trends.
Qualifications:
Required:
- Proven experience as a Data Scientist, Machine Learning Engineer, or similar role.
- Strong expertise in building and deploying machine learning models across various use cases.
- In-depth experience with AI frameworks and tools such as OpenAI (e.g., GPT models), Google Gemini, Microsoft Copilot, and others.
- Proficiency in machine learning techniques, including supervised/unsupervised learning, reinforcement learning, and deep learning.
- Expertise in model training, fine-tuning, and hyperparameter optimization.
- Strong programming skills in Python, R, or similar languages.
- Solid understanding of model evaluation metrics and performance tuning.
- Familiarity with cloud platforms (AWS, Azure, Google Cloud) and ML frameworks such as TensorFlow, PyTorch, and Keras.
- Experience with MLOps tools such as MLflow, Kubeflow, and DataRobot.
- Strong experience with data wrangling, feature engineering, and preprocessing techniques.
- Excellent problem-solving skills and the ability to communicate complex ideas to non-technical stakeholders.
Preferred:
- PhD or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Experience with large-scale data processing frameworks (Hadoop, Spark, Databricks).
- Expertise in Natural Language Processing (NLP) techniques and frameworks like Hugging Face, BERT, T5, etc.
- Familiarity with deploying AI solutions on cloud services, including AWS SageMaker, Azure ML, or Google AI Platform.
- Experience with distributed machine learning techniques, multi-GPU setups, and optimizing large-scale models.
- Knowledge of reinforcement learning (RL) algorithms and practical application experience.
- Familiarity with AI interpretability tools such as SHAP, LIME, and Fairness Indicators.
- Proficiency in using collaboration tools such as Jupyter Notebooks, Git, and Docker for version control and deployment.
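For illustration, here is a minimal sketch of the train/tune/evaluate loop the responsibilities describe, written in Python with scikit-learn; the dataset and hyperparameter grid are placeholders, not part of the role:

```python
# Illustrative only: a minimal train/tune/evaluate loop with scikit-learn.
# The dataset and hyperparameter grid are hypothetical placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Hyperparameter optimization via cross-validated grid search.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "learning_rate": [0.05, 0.1]},
    cv=5,
    scoring="f1",
)
grid.fit(X_train, y_train)

# Evaluate the tuned model on held-out data.
print("Best params:", grid.best_params_)
print(classification_report(y_test, grid.best_estimator_.predict(X_test)))
```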
Additional Tools & Technologies (Preferred Experience):
- Natural Language Processing (NLP): OpenAI GPT, BERT, T5, spaCy, NLTK, Hugging Face
- Machine Learning Frameworks: TensorFlow, PyTorch, Keras, Scikit-Learn
- Big Data Processing: Hadoop, Spark, Databricks, Dask
- Cloud Platforms: AWS SageMaker, Google AI Platform, Microsoft Azure ML, IBM Watson
- Automation & Deployment: Docker, Kubernetes, Terraform, Jenkins, CircleCI, GitLab CI/CD
- Visualization & Analysis: Tableau, Power BI, Plotly, Matplotlib, Seaborn, NumPy, Pandas
- Databases: RDBMS, NoSQL
- Version Control: Git, GitHub, GitLab
Why Join Us:
- Innovative Projects: Work on groundbreaking AI solutions and cutting-edge technology.
- Collaborative Team: Join a passionate, highly skilled, and collaborative team that values creativity and new ideas.
- Growth Opportunities: Develop your career in an expanding AI-focused company with continuous learning opportunities.
- Competitive Compensation: We offer a competitive salary and benefits package.
Posted 1 week ago
5.0 - 7.0 years
19 Lacs
Kolkata, Mumbai, Hyderabad
Work from Office
Reporting to: Global Head of Data Operations
Role purpose
As a Data Engineer, you will be a driving force towards data engineering excellence. Working with other data engineers, analysts, and the architecture function, you'll be involved in building out a modern data platform using a number of cutting-edge technologies in a multi-cloud environment. You'll get the opportunity to spread your knowledge and skills across multiple areas, with involvement in a range of different functional areas. As the business grows, we want our staff to grow with us, so there'll be plenty of opportunity to learn and upskill in areas such as data pipelines, data integrations, data preparation, data models, and analytical and reporting marts. Also, whilst work often follows business requirements and design concepts, you'll play a huge part in the continuous development and maturing of design patterns and automation processes for others to follow.
Accountabilities and main responsibilities
In this role, you will be delivering solutions and patterns through Agile methodologies as part of a squad. You'll be collaborating with customers, partners and peers, and will help to identify data requirements. We'd also rely on you to:
- Help break down large problems into smaller iterative steps
- Contribute to defining the prioritisation of your squad's backlog
- Build out the modern data platform (data pipelines, data integrations, data preparation, data models, analytical and reporting marts) based on business requirements using agreed design patterns (see the sketch after the skills list below)
- Help determine the most appropriate tool, method and design pattern in order to satisfy the requirement
- Proactively suggest improvements where you see issues
- Learn how to prepare our data in order to surface it for use within APIs
- Learn how to document, support, manage and maintain the modern data platform built within your squad
- Learn how to provide guidance and training to downstream consumers of data on how best to use the data in our platform
- Learn how to support and build new data APIs
- Contribute to evangelising and educating within Sanne about the better use and value of data
- Comply with all Sanne policies
- Any other duties in the scope of the role that the company requires.
Qualifications and skills
Technical Skills:
- Data Warehousing and Data Modelling
- Data Lakes (AWS Lake Formation, Azure Data Lake)
- Cloud Data Warehouses (AWS Redshift, Azure Synapse, Snowflake)
- ETL/ELT/Pipeline tools (AWS Glue, Azure Data Factory, FiveTran, Stitch)
- Data Message Bus/Pub-Sub systems (AWS SNS & SQS, Azure ASQ, Kafka, RabbitMQ)
- Data programming languages (SQL, Python, Scala, Java)
- Cloud Workflow Services (AWS Step Functions, Azure Logic Apps, Camunda)
- Interactive Query Services (AWS Athena, Azure DL Analytics)
- Event and schedule management (AWS Lambda Functions, Azure Functions)
- Traditional Microsoft BI Stack (SQL Server, SSIS, SSAS, SSRS)
- Reporting and visualisation tools (Power BI, QuickSight, Mode)
- NoSQL & Graph DBs (AWS Neptune, Azure Cosmos, Neo4j) (Desirable)
- API Management (Desirable)
Core Skills:
- Excellent communication and interpersonal skills
- Critical thinking and research capabilities
- Strong problem-solving skills
- Ability to plan and manage your own workload
- Work well on own initiative as well as part of a bigger team
- Working knowledge of Agile software development lifecycles.
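As a concrete illustration of the ingest/prepare/mart pattern named above, here is a minimal PySpark sketch; the paths, columns, and table layout are hypothetical placeholders:

```python
# Illustrative sketch of a simple ingest -> prepare -> reporting-mart pipeline
# in PySpark. Paths, schema, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_mart").getOrCreate()

# Ingest: raw CSV landed by an upstream integration.
raw = spark.read.option("header", True).csv("s3://example-landing/orders/")

# Prepare: type the columns and standardise keys.
prepared = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# Reporting mart: daily revenue aggregate for analysts.
daily_revenue = (
    prepared.groupBy(F.to_date("order_ts").alias("order_date"))
            .agg(F.sum("amount").alias("revenue"))
)
daily_revenue.write.mode("overwrite").parquet("s3://example-mart/daily_revenue/")
```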
Posted 1 week ago
0.0 - 2.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon
About Tredence
Tredence focuses on last-mile delivery of powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. The largest companies across industries are engaging with us and deploying their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees. Visit our website for more details.
Role Overview
We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure or GCP. As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.
Key Responsibilities
- Develop robust and scalable data pipelines using PySpark in cloud platforms like Azure Databricks or GCP Dataflow.
- Write optimized SQL queries for data transformation, analysis, and validation.
- Implement and support data warehouse models and principles, including: Fact and Dimension modeling; Star and Snowflake schemas; Slowly Changing Dimensions (SCD); Change Data Capture (CDC); Medallion Architecture (a minimal SCD sketch follows this listing).
- Monitor, troubleshoot, and improve pipeline performance and data quality.
- Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
- Communicate technical updates and contribute to sprint-level delivery.
Mandatory Skills
- Strong hands-on experience with SQL and Python
- Working knowledge of PySpark for data transformation
- Exposure to at least one cloud platform: Azure or GCP
- Good understanding of data engineering and warehousing fundamentals
- Excellent debugging and problem-solving skills
- Strong written and verbal communication skills
Preferred Skills
- Experience working with Databricks Community Edition or enterprise version
- Familiarity with data orchestration tools like Airflow or Azure Data Factory
- Exposure to CI/CD processes and version control (e.g., Git)
- Understanding of Agile/Scrum methodology and collaborative development
- Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.)
Required Skills: Azure Databricks / GCP, Python, SQL, PySpark
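To make the SCD concept concrete, here is an illustrative Slowly Changing Dimension Type 2 load in PySpark; table and column names are hypothetical, and production code would typically use a Delta MERGE instead of a full rewrite:

```python
# Illustrative SCD Type 2 sketch in PySpark. Table and column names are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_demo").getOrCreate()

dim = spark.table("warehouse.dim_customer")        # existing dimension
updates = spark.table("staging.customer_updates")  # today's source extract
current = dim.filter(F.col("is_current"))

# 1. Incoming rows that are new or whose tracked attribute changed.
changed = (
    updates.alias("u")
    .join(current.alias("d"),
          F.col("u.customer_id") == F.col("d.customer_id"), "left")
    .filter(F.col("d.customer_id").isNull() |
            (F.col("u.address") != F.col("d.address")))
    .select("u.*")
)

# 2. Expire the current versions of the changed keys.
expired = (
    current.join(changed.select("customer_id"), "customer_id", "left_semi")
    .withColumn("is_current", F.lit(False))
    .withColumn("end_date", F.current_date())
)

# 3. Open a new current version for each changed key.
new_rows = (
    changed.withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
)

# 4. Carry over history and untouched current rows, then write the result.
history = dim.filter(~F.col("is_current"))
untouched = current.join(changed.select("customer_id"), "customer_id", "left_anti")
result = history.unionByName(untouched).unionByName(expired).unionByName(new_rows)
result.write.mode("overwrite").saveAsTable("warehouse.dim_customer_v2")
```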
Posted 1 week ago
5.0 - 10.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking a skilled Azure Databricks Developer with strong Terraform expertise to join our data engineering or cloud team. This role involves building, automating, and maintaining scalable data pipelines and infrastructure in the Azure cloud environment using Databricks and Infrastructure as Code (IaC) practices. The ideal candidate has hands-on experience with data processing in Databricks and cloud provisioning using Terraform.
Key Responsibilities:
- Develop and optimize data pipelines using Azure Databricks (Spark, Delta Lake, notebooks, jobs)
- Design and automate infrastructure provisioning on Azure using Terraform
- Collaborate with data engineers, analysts, and cloud architects to integrate Databricks with other Azure services (e.g., Data Lake, Synapse, Key Vault)
- Maintain CI/CD pipelines for deploying Databricks and Terraform configurations
- Apply best practices for security, scalability, cost optimization, and performance
- Monitor and troubleshoot jobs and infrastructure components
- Document architecture, processes, and configuration standards
Required Skills & Experience:
- 5+ years of experience in Azure Databricks, including PySpark, notebooks, cluster management, Delta Lake
- Strong hands-on experience in Terraform for managing cloud infrastructure (especially Azure)
- Proficiency in Python and SQL
- Experience with Azure services: Azure Data Lake, Azure Data Factory, Azure Key Vault, Azure DevOps
- Familiarity with CI/CD pipelines and version control (e.g., Git)
- Good understanding of data engineering concepts and cloud-native architecture
Preferred Qualifications:
- Azure certifications (e.g., DP-203, AZ-104, or AZ-400)
- Knowledge of Databricks CLI, REST API, and workspace automation (see the sketch after this listing)
- Experience with monitoring and alerting for data pipelines and cloud resources
- Understanding of cost management for Databricks and Azure services
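As one small example of the workspace automation mentioned above, here is a hedged Python sketch that lists jobs through the Databricks Jobs REST API (version 2.1); the workspace URL and token are placeholders and would come from a secret store in practice:

```python
# Illustrative sketch: listing Databricks jobs via the REST API, one building
# block of workspace automation. Host and token values are placeholders.
import os

import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-123.4.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]  # personal access token (placeholder)

resp = requests.get(
    f"{host}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])
```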
Posted 1 week ago
7.0 - 10.0 years
20 - 30 Lacs
Hyderabad, Chennai
Work from Office
Proficient in designing and delivering data pipelines in Cloud Data Warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc. Solid understanding of database systems (relational/NoSQL) and data modeling techniques.
Required Candidate profile: We are looking for candidates with strong experience in data architecture. Potential companies: Tiger Analytics, Tredence, Quantiphi, Data Engineering Group within Infosys/TCS/Cognizant, Deloitte Consulting.
Perks and benefits: 5 working days - Onsite
Posted 1 week ago
8.0 - 13.0 years
17 - 20 Lacs
Bengaluru
Work from Office
Project description
We are seeking an experienced Senior Project Manager with a strong background in delivering data engineering and Python-based development projects. In this role, you will manage cross-functional teams and lead Agile delivery for high-impact, cloud-based data initiatives. You'll work closely with data engineers, scientists, architects, and business stakeholders to ensure projects are delivered on time, within scope, and aligned with strategic objectives. The ideal candidate combines technical fluency, strong leadership, and Agile delivery expertise in data-centric environments.
Responsibilities
- Lead and manage data engineering and Python-based development projects, ensuring timely delivery and alignment with business goals.
- Work closely with data engineers, data scientists, architects, and product owners to gather requirements and define project scope.
- Translate complex technical requirements into actionable project plans and user stories.
- Oversee sprint planning, backlog grooming, daily stand-ups, and retrospectives in Agile/Scrum environments.
- Ensure best practices in Python coding, data pipeline design, and cloud-based data architecture are followed.
- Identify and mitigate risks, manage dependencies, and escalate issues when needed.
- Own stakeholder communications, reporting, and documentation of all project artifacts.
- Track KPIs and delivery metrics to ensure accountability and continuous improvement.
Skills
Must have
- Experience: Minimum 8+ years of project management experience, including 3+ years managing data and Python-based development projects.
- Agile Expertise: Strong experience delivering projects in Agile/Scrum environments with distributed or hybrid teams.
- Technical Fluency: Solid understanding of Python, data pipelines, and ETL/ELT workflows. Familiarity with cloud platforms such as AWS, Azure, or GCP. Exposure to tools like Airflow, dbt, Spark, Databricks, or Snowflake is a plus.
- Tools: Proficiency with JIRA, Confluence, Git, and project dashboards (e.g., Power BI, Tableau).
- Soft Skills: Strong communication, stakeholder management, and leadership skills. Ability to translate between technical and non-technical audiences. Skilled in risk management, prioritization, and delivery tracking.
Nice to have
N/A
Other
Languages: English (C1 Advanced)
Seniority: Senior
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Lucknow
Work from Office
Functional Area: Data Engineering / SAP BW/4HANA / Automation
Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with File Transfer Protocols (e.g., SFTP, FTP) to manage data exchange with various systems (see the sketch after this listing).
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (Beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (Beneficial): Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus.
- SAP HANA (Underlying Database): Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
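Since SFTP-based data exchange is called out above, here is a minimal, hypothetical Python sketch using the paramiko library; host, credentials, and paths are placeholders, and real deployments would use key-based auth and a secret store:

```python
# Hypothetical SFTP exchange sketch using paramiko. Host, credentials, and
# remote/local paths are placeholders.
import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="etl_user", password="***placeholder***")
sftp = paramiko.SFTPClient.from_transport(transport)

# Pull an inbound extract and push an outbound confirmation file.
sftp.get("/outbound/bw_extract.csv", "/tmp/bw_extract.csv")
sftp.put("/tmp/load_confirmation.txt", "/inbound/load_confirmation.txt")

sftp.close()
transport.close()
```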
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Ludhiana
Work from Office
Functional Area: Data Engineering / SAP BW/4HANA / Automation
Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with File Transfer Protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (Beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (Beneficial): Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus.
- SAP HANA (Underlying Database): Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Functional Area: Data Engineering / SAP BW/4HANA / Automation
Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with File Transfer Protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (Beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (Beneficial): Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus.
- SAP HANA (Underlying Database): Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 1 week ago
6.0 - 9.0 years
8 - 11 Lacs
Chennai
Work from Office
About the job:
Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.
Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.
Skills:
- Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes (see the sketch after this listing).
- Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW etc. will be a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.
Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
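For illustration, a hypothetical PySpark/Spark SQL notebook cell of the kind used in a Fabric Lakehouse; paths and table names are placeholders, and the SQL assumes a Delta-backed table (the Fabric default):

```python
# Hypothetical notebook cell: ingest raw files into a Lakehouse table and
# derive a cleaned table with Spark SQL. Paths and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided implicitly in notebooks

raw = spark.read.json("Files/landing/events/")  # Lakehouse-style relative path
raw.write.mode("overwrite").saveAsTable("bronze_events")

spark.sql("""
    CREATE OR REPLACE TABLE silver_events AS
    SELECT CAST(event_ts AS TIMESTAMP) AS event_ts,
           user_id,
           event_type
    FROM bronze_events
    WHERE user_id IS NOT NULL
""")
```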
Posted 1 week ago
7.0 - 12.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII:
At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Team Overview:
Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation and continuous learning.
Position Overview:
As a Lead Data Engineer, you will serve as the technical anchor for the engineering team, responsible for designing and developing scalable, high-performance data solutions. You will own and drive data architecture that supports both functional and non-functional business needs, ensuring reliability, efficiency, and scalability. Your expertise in big data technologies, distributed systems, and cloud platforms will help shape the engineering roadmap and best practices for data processing, analytics, and real-time data serving. You will play a key role in architecting and optimizing data pipelines using Hadoop, Spark, Scala/Java, and cloud technologies to support enterprise-wide data initiatives. Additionally, experience with API development for serving low-latency data and Customer Data Platforms (CDP) will be a strong plus.
Key Responsibilities:
- Architect and build scalable, high-performance data pipelines and distributed data processing solutions using Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure).
- Design and implement real-time and batch data processing solutions, ensuring data is efficiently processed and made available for analytical and operational use (a minimal streaming sketch follows this listing).
- Develop APIs and data services to expose low-latency, high-throughput data for downstream applications, enabling real-time decision-making.
- Optimize and enhance data models, workflows, and processing frameworks to improve performance, scalability, and cost-efficiency.
- Drive data governance, security, and compliance best practices.
- Collaborate with data scientists, product teams, and business stakeholders to understand requirements and deliver data-driven solutions.
- Lead the design, implementation, and lifecycle management of data services and solutions.
- Stay up to date with emerging technologies and drive adoption of best practices in big data engineering, cloud computing, and API development.
- Provide technical leadership and mentorship to engineering teams, promoting best practices in data engineering and API design.
About You:
- 7+ years of experience in data engineering, software development, or distributed systems.
- Expertise in big data technologies such as Hadoop, Spark, and distributed processing frameworks.
- Strong programming skills in Scala and/or Java (Python is a plus).
- Experience with cloud platforms (AWS, GCP, or Azure) and their data ecosystems (e.g., S3, BigQuery, Databricks, EMR, Snowflake, etc.).
- Proficiency in API development using REST, GraphQL, or gRPC to serve real-time and batch data.
- Experience with real-time and streaming data architectures (Kafka, Flink, Kinesis, etc.).
- Strong knowledge of data modeling, ETL pipeline design, and performance optimization.
- Understanding of data governance, security, and compliance in large-scale data environments.
- Experience with Customer Data Platforms (CDP) or customer-centric data processing is a strong plus.
- Strong problem-solving skills and ability to work in complex, unstructured environments.
- Excellent communication and collaboration skills, with experience working in cross-functional teams.
Why Join Us
- Work with cutting-edge big data, API, and cloud technologies in a fast-paced, collaborative environment.
- Influence and shape the future of data architecture and real-time data services at Target.
- Solve high-impact business problems using scalable, low-latency data solutions.
- Be part of a culture that values innovation, learning, and growth.
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
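As a minimal sketch of the real-time processing described above, here is an illustrative Spark Structured Streaming job consuming from Kafka; it assumes the spark-sql-kafka connector package is on the classpath, and the broker, topic, and paths are hypothetical placeholders:

```python
# Illustrative real-time pipeline sketch: consume events from Kafka with
# Spark Structured Streaming and land them for analytical use.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
    .option("subscribe", "guest-events")                 # placeholder topic
    .load()
    .select(F.col("value").cast("string").alias("payload"),
            F.col("timestamp"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/streams/guest_events/")
    .option("checkpointLocation", "/data/checkpoints/guest_events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```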
Posted 1 week ago
2.0 - 6.0 years
5 - 9 Lacs
Pune
Work from Office
Data Engineer 1
Common skills: SQL, GCP BigQuery, ETL pipelines using Python/Airflow, experience with Spark/Hive/HDFS, and data modeling for data conversion. Resources: 4.
Prior experience working on a conversion/migration HR project is an additional skill needed, along with the skills mentioned above.
The Data Engineer should know the HR domain; all other functional-area requirements are given by the customer. Customer name: Uber.
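For illustration, a hypothetical Airflow 2.x DAG that loads a daily HR extract from Cloud Storage into BigQuery using the official Google provider; the project, dataset, bucket, and table names are placeholders:

```python
# Hypothetical Airflow DAG loading a daily extract into BigQuery.
# Requires apache-airflow-providers-google; all names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="hr_conversion_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_employees = GCSToBigQueryOperator(
        task_id="load_employees",
        bucket="example-hr-landing",
        source_objects=["employees/{{ ds }}/*.csv"],
        destination_project_dataset_table="example-project.hr_staging.employees",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )
```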
Posted 1 week ago
8.0 - 10.0 years
7 - 11 Lacs
Hyderabad, Pune
Work from Office
Sr Data Engineer 1
We are looking for a highly skilled Senior Data Engineer with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python.
Key Responsibilities
- Design, develop, and maintain scalable data warehouse solutions.
- Write and optimize complex SQL queries for data extraction, transformation, and reporting.
- Develop and automate data pipelines using Python.
- Work with AWS cloud services for data storage, processing, and analytics.
- Collaborate with cross-functional teams to provide data-driven insights and solutions.
- Ensure data integrity, security, and performance optimization.
- Work in UK shift hours to align with global stakeholders.
Required Skills & Experience
- 8-10 years of experience in Data Warehousing & Analytics.
- Strong proficiency in writing complex SQL queries with a deep understanding of query optimization, stored procedures, and indexing (see the sketch after this listing).
- Hands-on experience with Python for data processing and automation.
- Experience working with AWS cloud services.
- Ability to work independently and collaborate with teams across different time zones.
Good to Have
- Experience in the Finance domain and understanding of financial data structures.
- Hands-on experience with reporting tools like Power BI or Tableau.
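As a self-contained illustration of the kind of analytical SQL this role involves, here is a window-function query driven from Python; it uses the stdlib sqlite3 module with toy data so it runs anywhere (window functions need SQLite 3.25+), whereas the real role would target a warehouse:

```python
# Self-contained illustration of an analytical SQL pattern (window function)
# driven from Python, using stdlib sqlite3 with toy data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payments (account_id TEXT, paid_on TEXT, amount REAL);
    INSERT INTO payments VALUES
        ('A1', '2024-01-05', 120.0),
        ('A1', '2024-02-05', 80.0),
        ('A2', '2024-01-12', 200.0);
""")

# Running total per account: a common reporting transformation.
rows = conn.execute("""
    SELECT account_id,
           paid_on,
           amount,
           SUM(amount) OVER (
               PARTITION BY account_id ORDER BY paid_on
           ) AS running_total
    FROM payments
    ORDER BY account_id, paid_on
""").fetchall()

for row in rows:
    print(row)
```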
Posted 1 week ago
3.0 - 8.0 years
5 - 13 Lacs
Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)
Work from Office
Design and implement Python AI/ML/Gen AI models, algorithms, and applications. Coordinate with data scientists to translate their ideas into working solutions. Apply ML techniques to explore and analyze data patterns and insights. Support deployment of models to production environments.
Required Candidate profile: Strong proficiency in Python, including its ML libraries. Experience with machine learning algorithms and techniques. Understanding of AI concepts and neural networks. Experience with one of the cloud platforms and cloud-based machine learning.
Perks and benefits: 10% additional variable on top of fixed, plus mediclaim.
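For illustration, a hedged sketch of calling a hosted Gen AI model from Python using the openai client (v1 API); the model name and prompt are placeholders, and the API key is assumed to be in the environment:

```python
# Hypothetical Gen AI call sketch with the openai client (v1 API).
# Model name and prompt are placeholders; requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You summarise customer feedback."},
        {"role": "user", "content": "Summarise: 'Great app, but login is slow.'"},
    ],
)
print(response.choices[0].message.content)
```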
Posted 1 week ago
3.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Work from Office
3+ years of data engineering experience, preferably with one or more of these technologies: SQL, ANSI SQL, LookML, Data Modeling Language (DML), or BigQuery. Understanding of how to structure and feed data into Looker to provide optimal performance. Telco background. Experience developing data solutions in the cloud using GCP, AWS, or Azure.
Primary skills: SQL, BigQuery, Looker.
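As a small illustration of the BigQuery side of this stack, here is a sketch using the official Python client; the project, dataset, and table are placeholders, and credentials are assumed to come from the environment:

```python
# Illustrative sketch of querying BigQuery from Python with the official
# client library (google-cloud-bigquery). All names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

query = """
    SELECT region, COUNT(*) AS subscriber_count
    FROM `example-project.telco.subscribers`
    GROUP BY region
    ORDER BY subscriber_count DESC
"""
for row in client.query(query).result():
    print(row["region"], row["subscriber_count"])
```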
Posted 1 week ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata
Work from Office
This role involves the development and application of engineering practice and knowledge in defining, configuring and deploying industrial digital technologies (including but not limited to PLM and MES) for managing continuity of information across the engineering enterprise, including design, industrialization, manufacturing and supply chain, and for managing the manufacturing data.
Grade Specific: Focus on Digital Continuity and Manufacturing. Develops competency in own area of expertise. Shares expertise and provides guidance and support to others. Interprets clients' needs. Completes own role independently or with minimum supervision. Identifies problems and relevant issues in straightforward situations and generates solutions. Contributes to teamwork and interacts with customers.
Posted 1 week ago
5.0 - 8.0 years
7 - 10 Lacs
Ahmedabad
Work from Office
Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation (see the sketch after this listing).
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.
Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.
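As one example of the validation work mentioned above, here is a hypothetical post-load check for an SSIS-style ETL, comparing staging and target row counts over pyodbc; the connection string, driver name, and table names are placeholders:

```python
# Hypothetical post-load validation for an SSIS-style ETL: compare row counts
# between staging and target via pyodbc. All names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server;DATABASE=example_dw;Trusted_Connection=yes;"
)
cursor = conn.cursor()

cursor.execute("SELECT COUNT(*) FROM staging.orders")
staged = cursor.fetchone()[0]
cursor.execute("SELECT COUNT(*) FROM dw.fact_orders")
loaded = cursor.fetchone()[0]

# Fail loudly if the load dropped rows, so the job can alert.
if staged != loaded:
    raise RuntimeError(f"Row count mismatch: staged={staged}, loaded={loaded}")
print(f"Validation passed: {loaded} rows loaded.")
```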
Posted 1 week ago
3.0 - 5.0 years
27 - 32 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer (DE) / SDE – Data
Location: Bangalore
Experience range: 3-15 years
What we offer
Our mission is simple – Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team
DEX is a central data org for Kotak Bank which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills, preferably Python, for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters.
As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world; be an early member in the digital transformation journey of Kotak; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch and analytics solutions, in a programmatic way; and also be futuristic, building systems which can be operated by machines using AI technologies.
The data platform org is divided into 3 key verticals:
Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake; a managed compute and orchestration framework, including concepts of serverless data solutions; managing the central data warehouse for extremely high-concurrency use cases; building connectors for different sources; building a customer feature repository; building cost optimization solutions like EMR optimizers; performing automations; and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering
This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers, and all analytics use cases.
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship, and the Data Quality platform.
If you've got the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.
Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch (see the sketch after this listing).
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
- Extract, transform, and load data from various sources using SQL and AWS big data technologies.
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer / SDE in Data
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills
PREFERRED QUALIFICATIONS
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired.
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficient in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills.
For Managers:
- Customer centricity and obsession for the customer
- Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and to coach agile ways of working
- Ability to structure and organize teams, and streamline communication
- Prior work experience executing large-scale Data Engineering projects
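As a small illustration of orchestrating AWS data infrastructure from Python, here is a sketch that starts an AWS Glue ETL job with boto3 and polls its status; the job name and region are hypothetical placeholders:

```python
# Illustrative sketch: start a Glue ETL job with boto3 and poll until it
# reaches a terminal state. Job name and region are placeholders.
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(JobName="example-ingest-job")  # placeholder job
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    status = glue.get_job_run(JobName="example-ingest-job", RunId=run_id)
    state = status["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue job finished with state: {state}")
```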
Posted 1 week ago
3.0 - 5.0 years
3 - 5 Lacs
Bengaluru
Work from Office
What we offer
Our mission is simple – Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team
DEX is a central data org for Kotak Bank which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills, preferably Python, for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters.
As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world; be an early member in the digital transformation journey of Kotak; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch and analytics solutions, in a programmatic way; and also be futuristic, building systems which can be operated by machines using AI technologies.
The data platform org is divided into 3 key verticals:
Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake; a managed compute and orchestration framework, including concepts of serverless data solutions; managing the central data warehouse for extremely high-concurrency use cases; building connectors for different sources; building a customer feature repository; building cost optimization solutions like EMR optimizers; performing automations; and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering
This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers, and all analytics use cases.
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship, and the Data Quality platform.
If you've got the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.
Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch.
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
- Extract, transform, and load data from various sources using SQL and AWS big data technologies.
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer / SDE in Data
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills
BASIC QUALIFICATIONS for Data Engineering Manager / Software Development Manager
- 10+ years of engineering experience, most of which is in the data domain
- 5+ years of engineering team management experience
- 10+ years of experience planning, designing, developing and delivering consumer software
- Experience partnering with product or program management teams
- 5+ years of experience in managing data engineers, business intelligence engineers and/or data scientists
- Experience designing or architecting (design patterns, reliability and scaling) new and existing systems
- Experience managing multiple concurrent programs, projects and development teams in an Agile environment
- Strong understanding of Data Platform, Data Engineering and Data Governance
- Experience designing and developing large-scale, high-traffic applications
PREFERRED QUALIFICATIONS
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired.
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficient in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills.
Posted 1 week ago
3.0 - 5.0 years
30 - 35 Lacs
Bengaluru
Work from Office
What we offer Our mission is simple – Building trust. Our customer's trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That’s why, we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. About our team DEX is a central data org for Kotak Bank which manages entire data experience of Kotak Bank. DEX stands for Kotak’s Data Exchange. This org comprises of Data Platform, Data Engineering and Data Governance charter. The org sits closely with Analytics org. DEX is primarily working on greenfield project to revamp entire data platform which is on premise solutions to scalable AWS cloud-based platform. The team is being built ground up which provides great opportunities to technology fellows to build things from scratch and build one of the best-in-class data lake house solutions. The primary skills this team should encompass are Software development skills preferably Python for platform building on AWS; Data engineering Spark (pyspark, sparksql, scala) for ETL development, Advanced SQL and Data modelling for Analytics. The org size is expected to be around 100+ member team primarily based out of Bangalore comprising of ~10 sub teams independently driving their charter. As a member of this team, you get opportunity to learn fintech space which is most sought-after domain in current world, be a early member in digital transformation journey of Kotak, learn and leverage technology to build complex data data platform solutions including, real time, micro batch, batch and analytics solutions in a programmatic way and also be futuristic to build systems which can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals Data Platform This Vertical is responsible for building data platform which includes optimized storage for entire bank and building centralized data lake, managed compute and orchestrations framework including concepts of serverless data solutions, managing central data warehouse for extremely high concurrency use cases, building connectors for different sources, building customer feature repository, build cost optimization solutions like EMR optimizers, perform automations and build observability capabilities for Kotak’s data platform. The team will also be center for Data Engineering excellence driving trainings and knowledge sharing sessions with large data consumer base within Kotak. Data Engineering This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems and enable data consumptions for 30+ data analytics products. The team will learn and built data models in a config based and programmatic and think big to build one of the most leveraged data model for financial orgs. This team will also enable centralized reporting for Kotak Bank which cuts across multiple products and dimensions. Additionally, the data build by this team will be consumed by 20K + branch consumers, RMs, Branch Managers and all analytics usecases. 
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship, and the Data Quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, this is the team for you.

Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data:
Bachelor's degree in Computer Science, Engineering, or a related field
3-5 years of experience in data engineering
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
Experience with data pipeline tools such as Airflow and Spark
Experience with data modeling and data quality best practices
Excellent problem-solving and analytical skills
Strong communication and teamwork skills
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
Strong advanced SQL skills

BASIC QUALIFICATIONS for Data Engineering Manager / Software Development Manager:
10+ years of engineering experience, most of it in the data domain
5+ years of engineering team management experience
10+ years of planning, designing, developing, and delivering consumer software experience
Experience partnering with product and program management teams
5+ years of experience managing data engineers, business intelligence engineers, and/or data scientists
Experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
Experience managing multiple concurrent programs, projects, and development teams in an Agile environment
Strong understanding of Data Platform, Data Engineering, and Data Governance
Experience designing and developing large-scale, high-traffic applications

PREFERRED QUALIFICATIONS:
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
Prior experience in the Indian banking segment and/or fintech is desired
Experience with non-relational databases and data stores
Building and operating highly available, distributed data processing systems for large datasets
Professional software engineering and best practices for the full software development life cycle
Designing, developing, and implementing different types of data warehousing layers
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
Building scalable data infrastructure and understanding distributed systems concepts
SQL, ETL, and data modelling
Ensuring the accuracy and availability of data to customers
Proficiency in at least one scripting or programming language for handling large-volume data processing
Strong presentation and communication skills
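Since this posting calls out MWAA (managed Airflow) for orchestration, a minimal Airflow DAG sketch follows to show how such a daily pipeline might be scheduled. The DAG id, schedule, and the placeholder task body are hypothetical, not from the posting.

```python
# Minimal Airflow DAG sketch for orchestrating a daily ETL job on MWAA.
# DAG id, schedule, and the task body are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(**context):
    # Stand-in for the real Spark/Glue job submission logic.
    print(f"Running ETL for logical date {context['ds']}")

with DAG(
    dag_id="daily_governance_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_etl = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```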
Posted 1 week ago
3.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
What we offer
Our mission is simple – Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank, managing the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technologists a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to be a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, one of today's most sought-after domains; be an early member of Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and be futuristic in building systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost-optimization solutions such as EMR optimizers; automations; and observability capabilities for Kotak's data platform. The team will also be the center of data engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, cutting across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers, and all analytics use cases.
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship, and the Data Quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, this is the team for you.

Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data:
Bachelor's degree in Computer Science, Engineering, or a related field
3-5 years of experience in data engineering
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
Experience with data pipeline tools such as Airflow and Spark
Experience with data modeling and data quality best practices
Excellent problem-solving and analytical skills
Strong communication and teamwork skills
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
Strong advanced SQL skills

BASIC QUALIFICATIONS for Data Engineering Manager / Software Development Manager:
10+ years of engineering experience, most of it in the data domain
5+ years of engineering team management experience
10+ years of planning, designing, developing, and delivering consumer software experience
Experience partnering with product and program management teams
5+ years of experience managing data engineers, business intelligence engineers, and/or data scientists
Experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
Experience managing multiple concurrent programs, projects, and development teams in an Agile environment
Strong understanding of Data Platform, Data Engineering, and Data Governance
Experience designing and developing large-scale, high-traffic applications

PREFERRED QUALIFICATIONS:
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
Prior experience in the Indian banking segment and/or fintech is desired
Experience with non-relational databases and data stores
Building and operating highly available, distributed data processing systems for large datasets
Professional software engineering and best practices for the full software development life cycle
Designing, developing, and implementing different types of data warehousing layers
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
Building scalable data infrastructure and understanding distributed systems concepts
SQL, ETL, and data modelling
Ensuring the accuracy and availability of data to customers
Proficiency in at least one scripting or programming language for handling large-volume data processing
Strong presentation and communication skills
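As an illustration of the data quality side of the governance charter, here is a small hand-rolled completeness/uniqueness/validity check in pandas. Column names and thresholds are hypothetical; in practice a dedicated platform tool (e.g., Deequ or Great Expectations) would likely be used instead.

```python
# Hand-rolled data quality checks in pandas (illustrative only;
# column names and the 99% completeness threshold are hypothetical).
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    results = {
        # Completeness: share of non-null customer IDs.
        "customer_id_completeness": df["customer_id"].notna().mean(),
        # Uniqueness: no duplicate transaction IDs.
        "txn_id_unique": df["txn_id"].is_unique,
        # Validity: amounts must be non-negative.
        "amount_non_negative": bool((df["amount"] >= 0).all()),
    }
    results["passed"] = (
        results["customer_id_completeness"] >= 0.99
        and results["txn_id_unique"]
        and results["amount_non_negative"]
    )
    return results

# Tiny synthetic example to show the output shape.
checks = run_quality_checks(pd.DataFrame({
    "customer_id": ["c1", "c2", None],
    "txn_id": ["t1", "t2", "t3"],
    "amount": [100.0, 250.5, 80.0],
}))
print(checks)
```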
Posted 1 week ago
2.0 - 4.0 years
10 - 18 Lacs
Bengaluru
Work from Office
Role & responsibilities:
Design and Build Data Infrastructure: Develop scalable data pipelines and data lake/warehouse solutions for real-time and batch data using cloud and open-source tools.
Develop & Automate Data Workflows: Create Python-based ETL/ELT processes for data ingestion, validation, integration, and transformation across multiple sources.
Ensure Data Quality & Governance: Implement monitoring systems, resolve data quality issues, and enforce data governance and security best practices.
Collaborate & Mentor: Work with cross-functional teams to deliver data solutions, and mentor junior engineers as the team grows.
Explore New Tech: Research and implement emerging tools and technologies to improve system performance and scalability.
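A compact sketch of the kind of Python ETL/ELT step the second responsibility describes: extract from a file source, validate, transform, and load into a warehouse table. File paths, column names, and the SQLite target (standing in for a real warehouse) are all hypothetical.

```python
# Minimal Python ETL sketch: extract -> validate -> transform -> load.
# Paths, columns, and table names are hypothetical; SQLite stands in
# for a real warehouse target.
import sqlite3
import pandas as pd

def etl(source_csv: str, db_path: str) -> int:
    # Extract
    df = pd.read_csv(source_csv)

    # Validate: drop rows missing required fields.
    df = df.dropna(subset=["order_id", "amount"])

    # Transform: normalize the amount column, derive a converted value.
    df["amount"] = df["amount"].astype(float)
    df["amount_inr"] = df["amount"] * df.get("fx_rate", 1.0)

    # Load: append to the warehouse table.
    with sqlite3.connect(db_path) as conn:
        df.to_sql("orders_clean", conn, if_exists="append", index=False)
    return len(df)
```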
Posted 2 weeks ago
4.0 - 8.0 years
12 - 22 Lacs
Nagpur, Chennai, Bengaluru
Work from Office
Build ETL pipelines using FME to ingest and transform data from Idox/CCF systems. Create Custom Transformers in FME. Manage and query spatial datasets using PostgreSQL/PostGIS. Handle spatial formats like GeoPackage, GML, GeoJSON, and Shapefiles.

Required Candidate profile
Strong hands-on experience with FME workflows and spatial data transformation.
Tools by category:
ETL: FME (Safe Software), Talend (optional), Python
Spatial DB: PostGIS, Oracle Spatial
GIS

Perks and benefits
As per company standards
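To illustrate the PostGIS querying this role mentions, here is a minimal Python sketch that fetches features near a point as GeoJSON. Connection parameters, the table name, and the coordinates are hypothetical; only standard PostGIS functions (ST_DWithin, ST_MakePoint, ST_AsGeoJSON) are used.

```python
# Query a PostGIS table for features within a radius of a point,
# returned as GeoJSON. Connection string, table, and coordinates
# are hypothetical placeholders.
import json
import psycopg2

conn = psycopg2.connect("dbname=gis user=etl password=secret host=localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT id, ST_AsGeoJSON(geom)
        FROM land_parcels
        WHERE ST_DWithin(
            geom::geography,
            ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
            %s  -- search radius in metres
        )
        """,
        (-1.2577, 51.7520, 500),
    )
    for parcel_id, geojson in cur.fetchall():
        print(parcel_id, json.loads(geojson)["type"])
```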
Posted 2 weeks ago
4.0 - 9.0 years
17 - 22 Lacs
Gurugram
Work from Office
Job Title: S&C Global Network - AI - Healthcare Analytics - Consultant
Management Level: 9 - Team Lead/Consultant
Location: Bangalore/Gurgaon
Must-have skills: R, Python, SQL, Spark, Tableau, Power BI
Good-to-have skills: Ability to leverage design thinking, business process optimization, and stakeholder management skills.

Job Summary:
This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions.

Roles & Responsibilities:
Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.

WHAT'S IN IT FOR YOU
An opportunity to work on high-visibility projects with top pharma clients around the globe.
Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies.
Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional.
Personalized training modules to develop your strategy and consulting acumen and grow your skills, industry knowledge, and capabilities.
Opportunity to thrive in a culture that is committed to accelerating equality for all.
Engage in boundaryless collaboration across the entire organization.

What you would do in this role
Support delivery of small to medium-sized teams to deliver consulting projects for global clients.
Responsibilities may include strategy, implementation, process design, and change management for specific modules.
Work with the team or as an individual contributor on the assigned project, drawing on a variety of skills from data engineering to data science.
Provide subject matter expertise in various sub-segments of the Life Sciences industry.
Develop assets and methodologies, points of view, research, or white papers for use by the team and the larger community.
Acquire new skills that have utility across industry groups.
Support strategies and operating models focused on specific business units and assess likely competitive responses; also assess implementation readiness and points of greatest impact.
Co-lead proposals and business development efforts, and coordinate with other colleagues to create consensus-driven deliverables.
Execute a transformational change plan aligned with the client's business strategy and context for change; engage stakeholders in the change journey and build commitment to change.
Make presentations wherever required to a known audience or client on functional aspects of his or her domain.

Who are we looking for
Bachelor's or Master's degree in Statistics, Data Science, Applied Mathematics, Business Analytics, Computer Science, Information Systems, or another quantitative field.
Proven experience (4+ years) working on Life Sciences/Pharma/Healthcare projects and delivering successful outcomes.
Excellent understanding of pharma data sets: commercial, clinical, RWE (Real World Evidence), and EMR (Electronic Medical Records).
Hands-on experience working across one or more of these areas: real-world evidence data, R&D clinical data, digital marketing data.
Hands-on experience with datasets like Komodo, RAVE, IQVIA, Truven, Optum, etc.
Hands-on experience in building and deploying statistical models/machine learning, including segmentation and predictive modeling, hypothesis testing, multivariate statistical analysis, time series techniques, and optimization.
Proficiency in programming languages such as R, Python, SQL, Spark, etc.
Ability to work with large data sets and present findings/insights to key stakeholders; data management using databases like SQL.
Experience with any of the cloud platforms like AWS, Azure, or Google Cloud for deploying and scaling language models.
Experience with any of the data visualization tools like Tableau, Power BI, QlikView, or Spotfire is good to have.
Excellent analytical and problem-solving skills, with a data-driven mindset.
Proficient in Excel, MS Word, PowerPoint, etc.
Ability to solve complex business problems and deliver client delight.
Strong writing skills to build points of view on current industry trends.
Good communication, interpersonal, and presentation skills.

Professional & Technical Skills:
Relevant experience in the required domain.
Strong analytical, problem-solving, and communication skills.
Ability to work in a fast-paced, dynamic environment.

Additional Information:
Opportunity to work on innovative projects.
Career growth and leadership exposure.

About Our Company | Accenture

Qualification
Experience: 4-8 years
Educational Qualification: Bachelor's or Master's degree in Statistics, Data Science, Applied Mathematics, Business Analytics, Computer Science, Information Systems, or another quantitative field.
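For the predictive modeling skill listed above, a minimal scikit-learn sketch follows: a logistic regression classifier trained and evaluated on synthetic data. All features and data are synthetic stand-ins, not real patient or pharma data.

```python
# Sketch of a simple predictive model of the kind used in healthcare
# analytics: logistic regression on synthetic features (all data here
# is generated, purely for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))  # e.g., scaled age, visit count, lab score
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate with AUC, a common metric for binary outcome models.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```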
Posted 2 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
Coimbatore
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery
Good-to-have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems.

Roles & Responsibilities:
Expected to perform independently and become an SME.
Required active participation/contribution in team discussions.
Contribute to providing solutions to work-related problems.
Develop and maintain data pipelines.
Ensure data quality throughout the data lifecycle.
Implement ETL processes for data migration and deployment.
Collaborate with cross-functional teams to understand data requirements.
Optimize data storage and retrieval processes.

Professional & Technical Skills:
Must-have skills: Proficiency in Google BigQuery.
Strong understanding of data engineering principles.
Experience with cloud-based data services.
Knowledge of SQL and database management systems.
Hands-on experience with data modeling and schema design.

Additional Information:
The candidate should have a minimum of 3 years of experience in Google BigQuery.
This position is based at our Mumbai office.
A 15 years full time education is required.
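A minimal sketch of a BigQuery pipeline step using the google-cloud-bigquery Python client, of the kind this role would own. The project, dataset, and table names are hypothetical.

```python
# Minimal BigQuery transformation step via the google-cloud-bigquery
# client. Project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Transform inside BigQuery: aggregate raw events into a reporting table.
sql = """
CREATE OR REPLACE TABLE analytics.daily_events AS
SELECT event_date, event_type, COUNT(*) AS event_count
FROM raw.events
GROUP BY event_date, event_type
"""
client.query(sql).result()  # .result() blocks until the job finishes
print("daily_events refreshed")
```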
Posted 2 weeks ago