5.0 - 10.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Senior Data Engineer

Our Mission
SPAN is enabling electrification for all. We are a mission-driven company designing, building, and deploying products that electrify the built environment, reduce carbon emissions, and slow the effects of climate change. Decarbonization is the process of reducing or removing greenhouse gas emissions, especially carbon dioxide, before they enter our atmosphere. Electrification is the process of replacing fossil fuel appliances that run on gas or oil with all-electric upgrades for a cleaner way to power our lives.

At SPAN, we believe in:
- Enabling homes and vehicles powered by clean energy
- Making electrification upgrades possible
- Building more resilient homes with reliable backup
- Designing a flexible and distributed electrical grid

The Role
As a Data Engineer, you will design, build, test, and operate the infrastructure behind our real-time and batch analytics pipelines. You will work with multiple teams across the organization to provide analysis and insights on our data, and you will write ETL processes that support data ingestion. You will also guide and enforce best practices for data management, governance, and security, build monitoring for these data pipelines and ETL jobs, and create tooling that provides visibility into them.

Responsibilities
We are looking for a Data Engineer with a passion for building data pipelines, working with product, data science, and business intelligence teams, and delivering great solutions. As part of the team, you will:
- Acquire a deep business understanding of how SPAN data flows from IoT device to cloud, and build scalable, optimized data solutions that impact many stakeholders.
- Be an advocate for data quality and excellence of our platform.
- Build tools that help streamline the management and operation of our data ecosystem.
- Ensure best practices and standards in our data ecosystem are shared across teams.
- Work with teams within the company to build close relationships with our partners, understand the value our platform can bring, and make it better.
- Improve data discovery by creating data exploration processes and promoting adoption of data sources across the company.
- Write tools and applications to automate work rather than doing everything by hand.
- Assist internal teams in building out data logging, alerting, and monitoring for their applications.
- Care deeply about the CI/CD process.
- Design, develop, and establish KPIs to monitor analysis and provide strategic insights to drive growth and performance.

About You

Required Qualifications
- Bachelor's degree in a quantitative discipline: computer science, statistics, operations research, informatics, engineering, applied mathematics, economics, etc.
- 5+ years of relevant work experience in data engineering, business intelligence, research, or related fields.
- Expert-level, production-grade programming experience in at least one of Python, Kotlin, or another JVM-based language, writing clean, concise, and well-structured code.
- Experience with infrastructure-as-code tools: Pulumi, Terraform, etc.
- Experience with CI/CD systems: CircleCI, GitHub Actions, Argo CD, etc.
- Experience managing data engineering infrastructure with Docker and Kubernetes.
- Experience with low-latency data processing solutions such as Flink, Prefect, AWS Kinesis, Kafka, or Spark Streaming.
- Experience with SQL/relational databases and OLAP databases like Snowflake.
- Experience working in AWS: S3, Glue, Athena, MSK, EMR, ECR, etc.

Bonus Qualifications
- Experience in the energy industry
- Experience building IoT and/or hardware products
- Understanding of electrical systems and residential loads
- Experience with data visualization using Tableau
- Experience with data loading tools like Fivetran and data debugging tools such as Datadog

Life at SPAN
Our Bengaluru team plays a pivotal role in SPAN's continued growth and expansion. Together, we're driving engineering, product development, and operational excellence to shape the future of home energy solutions. As part of our team in India, you'll have the opportunity to collaborate closely with our teams in the US and across the globe. This international collaboration fosters innovation, learning, and growth, while helping us achieve our bold mission of electrifying homes and advancing clean energy solutions worldwide. Our in-office culture offers the chance for dynamic interactions and hands-on teamwork, making SPAN a truly collaborative environment where every team member's contribution matters.

Our climate-focused culture is driven by a team of forward-thinkers, engineers, and problem-solvers who push boundaries every day.
- Do mission-driven work: Every role at SPAN directly advances clean energy adoption.
- Bring powerful ideas to life: We encourage diverse ideas and perspectives to drive stronger products.
- Nurture an innovation-first mindset: We encourage big thinking and bold action.
- Deliver exceptional customer value: We value hard work and the ability to deliver exceptional customer value.

Benefits at SPAN India
- Generous paid leave
- Comprehensive insurance & health benefits
- Centrally located office in Bengaluru with easy access to public transit, dining, and city amenities

Interested in joining our team? Apply today and we'll be in touch with the next steps!
Posted 2 weeks ago
10.0 - 15.0 years
35 - 50 Lacs
Hyderabad, Bengaluru
Work from Office
Job Title: Senior Kafka Engineer
Location: Hyderabad / Bangalore
Work Mode: Work from Office | 24/7 rotational shifts
Type: Full-Time
Experience: 8+ years

About the Role:
We're hiring a Senior Kafka Engineer to manage and enhance our Kafka infrastructure on AWS and Confluent Platform. You'll lead efforts in building secure, scalable, and reliable data streaming solutions for high-impact FinTech systems.

Key Responsibilities:
- Manage and optimize Kafka and Confluent deployments on AWS
- Design and maintain Kafka producers, consumers, streams, and connectors
- Define schema, partitioning, and retention policies
- Monitor performance using Prometheus, Grafana, and Confluent tools
- Automate infrastructure using Terraform, Helm, and Kubernetes (EKS)
- Ensure high availability, security, and disaster recovery
- Collaborate with teams and share Kafka best practices

Required Skills:
- 8+ years in platform engineering, 5+ with Kafka & Confluent
- Strong Java or Python Kafka client development
- Hands-on with Schema Registry, Control Center, ksqlDB
- Kafka deployment on AWS (MSK or EC2)
- Kafka Connect, Streams, and schema tools
- Kubernetes (EKS), Terraform, Prometheus, Grafana

Nice to Have:
- FinTech or regulated industry experience
- Knowledge of TLS, SASL/OAuth, RBAC
- Experience with Flink or Spark Streaming
- Kafka governance and multi-tenancy
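The partitioning policies this role mentions rest on one idea: a keyed record is always routed to the same partition, which is what preserves per-key ordering in Kafka. A minimal sketch of that routing is below; the hash function is a simplified stand-in (real Kafka clients use murmur2, not the loop shown here), and the key names are made up for illustration:

```python
def assign_partition(key: bytes, num_partitions: int) -> int:
    """Simplified stand-in for Kafka's default partitioner: a keyed
    record always lands on the same partition, preserving per-key
    ordering. Real clients use a murmur2 hash, not this toy loop."""
    # Deterministic hash of the key bytes, kept non-negative,
    # reduced modulo the partition count.
    h = 0
    for b in key:
        h = (h * 31 + b) & 0x7FFFFFFF
    return h % num_partitions

# All events for the same account key map to one partition...
assert assign_partition(b"account-42", 6) == assign_partition(b"account-42", 6)

# ...while different keys spread across the partition space.
partitions = {assign_partition(f"account-{i}".encode(), 6) for i in range(100)}
print(sorted(partitions))
```

Retention policies are then set per topic (by time or size) independently of this routing, so a hot partition keeps the same key mapping while old segments age out.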
Posted 2 weeks ago
8.0 - 13.0 years
13 - 17 Lacs
Bengaluru
Work from Office
We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bengaluru, Karnataka (IN-KA), India (IN).

Data Engineer Lead
- Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory)
- Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink
- Experienced with software support for applications written in Python & SQL
- Administration, configuration, and maintenance of Snowflake & dbt
- Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub
- Debugging issues, root cause analysis, and applying fixes
- Management and maintenance of ETL processes (bug fixing and batch job monitoring)

Training & Certification
- Apache Kafka Administration
- Snowflake Fundamentals/Advanced Training

Experience
- 8 years of experience in a technical role working with AWS
- At least 2 years in a leadership or management role
Posted 2 weeks ago
7.0 - 12.0 years
13 - 18 Lacs
Bengaluru
Work from Office
We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger, overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience in Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration
Preferred Qualifications
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise with technical interface design
- Use of Docker

Responsibilities
- Design and implement scalable data architectures using AWS services, Confluent, and Kafka
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture
Posted 2 weeks ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Roles & Responsibilities:
Total 8-10 years of working experience

Experience/Needs
- 8-10 years of experience with big data tools like Spark, Kafka, Hadoop, etc.
- Design and deliver consumer-centric, high-performance systems. You would be dealing with huge volumes of data sets arriving through batch and streaming platforms. You will be responsible for building and delivering data pipelines that process, transform, integrate, and enrich data to meet various demands from the business
- Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting aspects
- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc.
- Design, build, test, and deploy streaming pipelines for data processing in real time and at scale
- Experience with stream-processing systems like Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object function scripting languages: Scala, Java, etc.
- Develop software systems using test-driven development employing CI/CD practices
- Partner with other engineers and team members to develop software that meets business needs
- Follow Agile methodology for software development and technical documentation
- Good to have banking/finance domain knowledge
- Strong written and oral communication, presentation, and interpersonal skills
- Exceptional analytical, conceptual, and problem-solving abilities
- Able to prioritize and execute tasks in a high-pressure environment
- Experience working in a team-oriented, collaborative environment
- 8-10 years of hands-on coding experience
- Proficient in Java, with a good knowledge of its ecosystems
- Experience with writing Spark code using the Scala language
- Experience with big data tools like Sqoop, Hive, Pig, Hue
- Solid understanding of object-oriented programming and HDFS concepts
- Familiar with various design and architectural patterns
- Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc.
- Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB, and Cassandra
- Experience with data pipeline tools like Airflow, etc.
- Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery
- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
- Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation, and tooling frameworks

Location: Pune/Mumbai/Bangalore/Chennai
Posted 2 weeks ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
BS or higher degree in Computer Science (or equivalent field)
3-6+ years of programming experience with Java and Python
Strong in writing SQL queries and understanding of Kafka, Scala, Spark/Flink
Exposure to AWS Lambda, AWS CloudWatch, Step Functions, EC2, CloudFormation, Jenkins
Posted 2 weeks ago
8.0 - 13.0 years
9 - 14 Lacs
Bengaluru
Work from Office
8+ years of combined experience across backend and data platform engineering roles
Worked on large-scale distributed systems
5+ years of experience building data platforms with Apache Spark, Flink, or similar frameworks
7+ years of experience programming in Java
Experience building large-scale data/event pipelines
Experience with relational SQL and NoSQL databases, including Postgres/MySQL, Cassandra, MongoDB
Demonstrated experience with EKS, EMR, S3, IAM, KDA, Athena, Lambda, networking, ElastiCache, and other AWS services
Posted 2 weeks ago
1.0 - 3.0 years
4 - 6 Lacs
Bengaluru
Work from Office
Java & OOP: 1-3 years of experience; strong grasp of core Java (collections, concurrency, GC, memory model) and design patterns
Databases: hands-on with PostgreSQL, MySQL, and MongoDB; schema design, indexing, query tuning
APIs & Frameworks: Spring Boot, Spring Data, or equivalent
Collaboration: agile practices (Scrum/Kanban), clear communication and documentation

Additional Skills
Streaming: practical experience with Apache Kafka (producers/consumers, topics, partitions) and Apache Flink (stateful stream processing, windowing, watermarks)
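The Flink concepts listed above (event-time windows and watermarks) can be sketched without Flink itself. The toy aggregator below buffers keyed events into tumbling event-time windows and emits a window only once the watermark passes its end; here the watermark is simply the maximum event timestamp seen minus an allowed lateness. All names, constants, and the watermark rule are illustrative assumptions, not Flink's actual API:

```python
from collections import defaultdict

WINDOW_MS = 10_000   # tumbling window size (illustrative)
LATENESS_MS = 2_000  # how far the watermark trails the max timestamp

class TumblingWindowAggregator:
    """Toy event-time windowing with a watermark, in the spirit of
    Flink's tumbling windows (not Flink's real API)."""

    def __init__(self):
        self.windows = defaultdict(int)  # (key, window_start) -> event count
        self.max_ts = 0

    def on_event(self, key, ts):
        # Assign the event to its tumbling window by truncating the timestamp.
        start = (ts // WINDOW_MS) * WINDOW_MS
        self.windows[(key, start)] += 1
        self.max_ts = max(self.max_ts, ts)
        return self._fire()

    def _fire(self):
        """Emit every window whose end has fallen behind the watermark."""
        watermark = self.max_ts - LATENESS_MS
        done = [w for w in self.windows if w[1] + WINDOW_MS <= watermark]
        return {w: self.windows.pop(w) for w in done}

agg = TumblingWindowAggregator()
agg.on_event("sensor-a", 1_000)            # window [0, 10000) stays open
agg.on_event("sensor-a", 4_000)
fired = agg.on_event("sensor-a", 13_000)   # watermark 11000 closes [0, 10000)
print(fired)  # {('sensor-a', 0): 2}
```

The point of the lateness margin is that an event timestamped 9_500 arriving after the 13_000 event would already be late; real Flink exposes knobs (allowed lateness, side outputs) for exactly that case.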
Posted 2 weeks ago
10.0 - 20.0 years
35 - 60 Lacs
Mumbai, India
Work from Office
Design full-stack solutions with cloud infrastructure (IaaS, PaaS, SaaS, on-premise, hybrid cloud). Support application and infrastructure design and build as a subject matter expert. Implement proofs of concept to demonstrate the value of the solution designed. Provide consulting support to ensure delivery teams build scalable, extensible, high-availability, low-latency, and highly usable applications. Ensure solutions are aligned with requirements from all stakeholders such as consumers, business, IT, security, and compliance. Ensure that all enterprise IT parameters and constraints are considered as part of the design. Design an appropriate technical solution to meet business requirements that may involve hybrid cloud environments, including cloud-native architecture, microservices, etc. Working knowledge of a high-availability, low-latency end-to-end technology stack is especially important, using both physical and virtual load balancing, caching, and scaling technology. Awareness of full-stack web development frameworks such as Angular / React / Vue. Awareness of relational and non-relational / NoSQL databases such as MongoDB / MS SQL / Cassandra / Neo4j / DynamoDB. Awareness of data streaming platforms such as Apache Kafka / Apache Flink / AWS Kinesis. Working experience of using AWS Step Functions or Azure Logic Apps with serverless Lambda or Azure Functions. Optimizes and incorporates the inputs of specialists in solution design. Establishes the validity of a solution and its components with both short-term and long-term implications. Identifies the scalability options and implications on IT strategy and/or related implications of a solution and includes these in design activities and planning. Build strong professional relationships with key IT and business executives. Be a trusted advisor for cross-functional and management teams. Partners effectively with other teams to ensure problem resolution. Provide solutions and advice, and create architectures and presentations.
Documents and effectively transfers knowledge to internal and external stakeholders. Demonstrates knowledge of public cloud technology & solutions. Applies broad understanding of technical innovations & trends in solving business problems. Manage special projects and strategic initiatives as assigned by management. Implement and assist in developing policies for information security and environmental compliance, ensuring the highest standards are maintained. Ensure adherence to SLAs with internal and external customers and compliance with information security policies, including risk assessments and procedure reviews.
Posted 2 weeks ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Job Information
Job Opening ID: ZR_1581_JOB
Date Opened: 25/11/2022
Industry: Technology
Job Type:
Work Experience: 8-12 years
Job Title: Senior Specialist - Data Engineer
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411001
Number of Positions: 4
Location: Pune/Mumbai/Bangalore/Chennai

Roles & Responsibilities:
Total 8-10 years of working experience

Experience/Needs
- 8-10 years of experience with big data tools like Spark, Kafka, Hadoop, etc.
- Design and deliver consumer-centric, high-performance systems. You would be dealing with huge volumes of data sets arriving through batch and streaming platforms. You will be responsible for building and delivering data pipelines that process, transform, integrate, and enrich data to meet various demands from the business
- Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting aspects
- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc.
- Design, build, test, and deploy streaming pipelines for data processing in real time and at scale
- Experience with stream-processing systems like Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object function scripting languages: Scala, Java, etc.
- Develop software systems using test-driven development employing CI/CD practices
- Partner with other engineers and team members to develop software that meets business needs
- Follow Agile methodology for software development and technical documentation
- Good to have banking/finance domain knowledge
- Strong written and oral communication, presentation, and interpersonal skills
- Exceptional analytical, conceptual, and problem-solving abilities
- Able to prioritize and execute tasks in a high-pressure environment
- Experience working in a team-oriented, collaborative environment
- 8-10 years of hands-on coding experience
- Proficient in Java, with a good knowledge of its ecosystems
- Experience with writing Spark code using the Scala language
- Experience with big data tools like Sqoop, Hive, Pig, Hue
- Solid understanding of object-oriented programming and HDFS concepts
- Familiar with various design and architectural patterns
- Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc.
- Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB, and Cassandra
- Experience with data pipeline tools like Airflow, etc.
- Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery
- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
- Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation, and tooling frameworks
Posted 2 weeks ago
10.0 - 15.0 years
25 - 40 Lacs
Mumbai
Work from Office
Overview of the Company:
Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview:
The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About the Role
Title: Lead Data Engineer
Location: Mumbai

Responsibilities:
- End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
- Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution.
- Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
- Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details:
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
- Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
- Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
- CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes:
- Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
- Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
Posted 2 weeks ago
3.0 - 7.0 years
17 - 20 Lacs
Bengaluru
Work from Office
Job Title: Industry & Function AI Data Engineer + S&C GN
Management Level: 09 - Consultant
Location: Primary - Bengaluru, Secondary - Gurugram
Must-Have Skills: Data engineering expertise; cloud platforms: AWS, Azure, GCP; proficiency in Python, SQL, PySpark, and ETL frameworks
Good-to-Have Skills: LLM architecture; containerization tools: Docker, Kubernetes; real-time data processing tools: Kafka, Flink; certifications like AWS Certified Data Analytics - Specialty, Google Professional Data Engineer, Snowflake, dbt, etc.

Job Summary:
As a Data Engineer, you will play a critical role in designing, implementing, and optimizing data infrastructure to power analytics, machine learning, and enterprise decision-making. Your work will ensure high-quality, reliable data is accessible for actionable insights. This involves leveraging technical expertise, collaborating with stakeholders, and staying updated with the latest tools and technologies to deliver scalable and efficient data solutions.

Roles & Responsibilities:
- Build and Maintain Data Infrastructure: Design, implement, and optimize scalable data pipelines and systems for seamless ingestion, transformation, and storage of data.
- Collaborate with Stakeholders: Work closely with business teams, data analysts, and data scientists to understand data requirements and deliver actionable solutions.
- Leverage Tools and Technologies: Utilize Python, SQL, PySpark, and ETL frameworks to manage large datasets efficiently.
- Cloud Integration: Develop secure, scalable, and cost-efficient solutions using cloud platforms such as Azure, AWS, and GCP.
- Ensure Data Quality: Focus on data reliability, consistency, and quality using automation and monitoring techniques.
- Document and Share Best Practices: Create detailed documentation, share best practices, and mentor team members to promote a strong data culture.
- Continuous Learning: Stay updated with the latest tools and technologies in data engineering through professional development opportunities.
Professional & Technical Skills:
- Strong proficiency in programming languages such as Python, SQL, and PySpark
- Experience with cloud platforms (AWS, Azure, GCP) and their data services
- Familiarity with ETL frameworks and data pipeline design
- Strong knowledge of traditional statistical methods and basic machine learning techniques
- Knowledge of containerization tools (Docker, Kubernetes)
- Knowledge of LLM, RAG, and agentic AI architectures
- Certification in Data Science or related fields (e.g., AWS Certified Data Analytics - Specialty, Google Professional Data Engineer)

Additional Information:
The ideal candidate has a robust educational background in data engineering or a related field and a proven track record of building scalable, high-quality data solutions in the Consumer Goods sector. This position offers opportunities to design and implement cutting-edge data systems that drive business transformation, collaborate with global teams to solve complex data challenges and deliver measurable business outcomes, and enhance your expertise by working on innovative projects utilizing the latest technologies in cloud, data engineering, and AI.

Qualifications
Experience: Minimum 3-7 years in data engineering or related fields, with a focus on the Consumer Goods industry
Educational Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field
Posted 2 weeks ago
4.0 - 8.0 years
5 - 9 Lacs
Hyderabad, Bengaluru
Work from Office
What's in it for you?
- Pay above market standards
- The role is going to be contract-based, with project timelines from 2-12 months, or freelancing
- Be a part of an elite community of professionals who can solve complex AI challenges

Work location could be:
- Remote (highly likely)
- Onsite at the client location
- Deccan AI's office: Hyderabad or Bangalore

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools
- Develop real-time and batch data pipelines to support analytics and machine learning
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP)
- Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA)
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana)

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions
- Contributions to open-source data engineering communities

What are the next steps?
Register on our Soul AI website
Posted 3 weeks ago
4.0 - 8.0 years
13 - 17 Lacs
Hyderabad, Bengaluru
Work from Office
Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools
- Develop real-time and batch data pipelines to support analytics and machine learning
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP)
- Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA)
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana)

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions
- Contributions to open-source data engineering communities
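Governance requirements like the GDPR compliance mentioned above often reduce to concrete pipeline steps, such as masking personally identifiable fields before records land in analytics storage. Below is a minimal sketch of one such step; the field names, salt, and truncated-digest scheme are illustrative assumptions, not a compliance recipe:

```python
import hashlib

# Fields treated as PII in this example; a real pipeline would take
# this from a governance catalog, not a hard-coded set.
PII_FIELDS = {"email", "phone"}

def mask_record(record: dict, salt: str = "demo-salt") -> dict:
    """Replace PII values with a salted SHA-256 digest so records stay
    joinable on the masked token without exposing the raw value."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS and value is not None:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[field] = digest[:12]  # truncated token, illustrative only
        else:
            out[field] = value
    return out

masked = mask_record({"user_id": 7, "email": "a@example.com", "amount": 12.5})
print(masked["user_id"], masked["amount"])  # non-PII fields pass through
```

Because the digest is deterministic for a given salt, two pipelines masking the same value produce the same token, which keeps joins and aggregations working on masked data.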
Posted 3 weeks ago
5.0 - 8.0 years
0 - 0 Lacs
Pune
Work from Office
Experience: 5-8 yrs
Location: Hyderabad
Notice Period: Immediate to 30 days only

Job Description:
- Experience between 6 and 8 years
- MUST have proficiency in the Java programming language
- Experience with and strong knowledge of Apache Flink
- Experience with Apache Airflow
- Knowledge of containerization and orchestration tools, e.g., Docker, Kubernetes
- Familiarity with cloud platforms (e.g., AWS, GCP, Azure) is a plus
- Familiarity with the Python programming language
- Experience with front-end development

Mandatory Skills: Apache Airflow, Hibernate, Java, Java Spring Cloud, Microservices, Spring, Spring Security, Spring Boot, Spring MVC, Spring Integration

****Make sure that Java and Apache Airflow experience is mentioned on your CV****
Posted 3 weeks ago
7.0 - 12.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Kafka
Good to have skills: Spring Boot
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring successful project delivery.
Roles & Responsibilities:
- Design and develop a high-performance microservices-based framework using Java and Spring Boot
- Implement event-driven architecture using technologies like Apache Kafka and Apache Flink
- Collaborate with cross-functional teams to integrate microservices with other systems
- Ensure the scalability, reliability, and performance of backend services
- Stay up to date with the latest trends and technologies in the Java and Spring ecosystem
- Ensure timely project delivery
Professional & Technical Skills:
- Must-have skills: proficiency in Spring Boot, Java 8+, microservices, and event-driven architecture
- Knowledge of Java Enterprise and microservice design patterns
- Strong understanding of distributed systems
- Experience in microservices architecture
- Knowledge of event-driven architecture
- Hands-on experience designing and implementing scalable applications
Additional Information:
- The candidate should have a minimum of 7+ years of experience
- Familiarity with Kubernetes, Docker, CI/CD tools, and the AWS cloud is desired but not a must
- A 15-year full-time education is required
Posted 3 weeks ago
7.0 - 12.0 years
0 - 0 Lacs
Chennai
Work from Office
Job Description for Senior Data Engineer at FYNXT
Experience Level: 8+ years
Job Title: Senior Data Engineer
Location: Chennai
Job Type: Full Time
Job Description: FYNXT is a Singapore-based software product development company that provides a Software-as-a-Service (SaaS) platform to digitally transform leading brokerage firms and fund management companies and help them grow their market share. Our industry-leading digital front-office platform has helped several leading financial institutions in the Forex industry go fully digital, optimize their operations, cut costs, and become more profitable. For more, visit: www.fynxt.com
Key Responsibilities:
- Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures (Apache Iceberg, Delta Lake) to unify data lakes and warehouses.
- Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink to process structured/unstructured data with low latency.
- High-Performance Applications: Leverage Java to build scalable, high-throughput data applications and services.
- Modern Data Infrastructure: Leverage modern data warehouses and query engines (Trino, Spark) for sub-second operations and analytics on real-time data.
- Database Expertise: Work with RDBMS (PostgreSQL, MySQL, SQL Server) and NoSQL (Cassandra, MongoDB) systems to manage diverse data workloads.
- Data Governance: Ensure data integrity, security, and compliance across multi-tenant systems.
- Cost & Performance Optimization: Manage production infrastructure for reliability, scalability, and cost efficiency.
- Innovation: Stay ahead of trends in the data ecosystem (e.g., open table formats, stream processing) to drive technical excellence.
- API Development (Optional): Build and maintain Web APIs (REST/GraphQL) to expose data services internally and externally.
Simptra Technologies Pvt. Ltd.
hr@fynxt.com www.fynxt.com
Qualifications:
- 8+ years of data engineering experience with large-scale systems (petabyte-level).
- Expert proficiency in Java for data-intensive applications.
- Hands-on experience with lakehouse architectures, stream processing (Flink), and event streaming (Kafka/Pulsar).
- Strong SQL skills and familiarity with RDBMS/NoSQL databases.
- Proven track record in optimizing query engines (e.g., Spark, Presto) and data pipelines.
- Knowledge of data governance, security frameworks, and multi-tenant systems.
- Experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform).
What we offer:
- Unique experience in the fintech industry with a leading, fast-growing company.
- A good atmosphere at work and a comfortable working environment.
- Additional benefit of Group Health Insurance, including OPD health insurance.
- Coverage for self + family (spouse and up to 2 children).
- Attractive leave benefits such as maternity and paternity benefit, vacation leave, and leave encashment.
- Reward & recognition: monthly, quarterly, half-yearly, and yearly.
- Loyalty benefits.
- Employee referral program.
Posted 3 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Lead the design, development, and implementation of applications.
- Collaborate with cross-functional teams to gather and analyze requirements.
- Ensure the applications meet quality standards and are delivered on time.
- Provide technical guidance and mentorship to junior team members.
- Stay updated with the latest industry trends and technologies.
- Identify and resolve any issues or bottlenecks in the application development process.
Professional & Technical Skills:
- Must-have skills: proficiency in Apache Spark.
- Strong understanding of distributed computing and parallel processing.
- Experience with big data processing frameworks like Hadoop or Apache Flink.
- Hands-on experience with programming languages like Java or Scala.
- Knowledge of database systems and SQL.
- Good-to-have skills: experience with cloud platforms like AWS or Azure.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Pune office.
- A 15-year full-time education is required.
Posted 3 weeks ago
2.0 - 7.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Lead the design, development, and implementation of applications.
- Collaborate with cross-functional teams to gather and analyze requirements.
- Ensure the applications meet quality standards and are delivered on time.
- Provide technical guidance and mentorship to junior team members.
- Stay updated with the latest industry trends and technologies.
- Identify and resolve any issues or bottlenecks in the application development process.
Professional & Technical Skills:
- Must-have skills: proficiency in Apache Spark.
- Strong understanding of distributed computing principles.
- Experience with big data processing frameworks like Hadoop or Apache Flink.
- Knowledge of programming languages such as Java or Scala.
- Hands-on experience with data processing and analysis using Spark SQL.
- Good-to-have skills: familiarity with cloud platforms like AWS or Azure.
Additional Information:
- The candidate should have a minimum of 2 years of experience in Apache Spark.
- This position is based at our Chennai office.
- A 15-year full-time education is required.
Posted 4 weeks ago
1.0 - 5.0 years
27 - 32 Lacs
Karnataka
Work from Office
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.
About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, the ML model development lifecycle, ML engineering, and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. The data sets we process are composed of various facets including telemetry data, associated metadata, IT asset information, contextual information about threat exposure, and many more. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper-scale Data Lakehouse.
We are seeking a strategic and technically savvy leader to head our Data and ML Platform team. As the head, you will be responsible for defining and building our ML Experimentation Platform from the ground up, while scaling our data and ML infrastructure to support various roles including Data Platform Engineers, Data Scientists, and Threat Analysts. Your key responsibilities will involve overseeing the design, implementation, and maintenance of scalable ML pipelines for data preparation, cataloging, feature engineering, model training, model serving, and in-field model performance monitoring. These efforts will directly influence critical business decisions. In this role, you'll foster a production-focused culture that effectively bridges the gap between model development and operational success. Furthermore, you'll be at the forefront of spearheading our ongoing Generative AI investments. The ideal candidate for this position will combine strategic vision with hands-on technical expertise in machine learning and data infrastructure, driving innovation and excellence across our data and ML initiatives. We are building this team with ownership at Bengaluru, India; this leader will help us bootstrap the entire site, starting with this team.
What You'll Do
Strategic Leadership
- Define the vision, strategy and roadmap for the organization's data and ML platform to align with critical business goals.
- Help design, build, and facilitate adoption of a modern Data+ML platform.
- Stay updated on emerging technologies and trends in data platforms, MLOps and AI/ML.
Team Management
- Build a team of Data and ML Platform engineers from a small footprint across multiple geographies.
- Foster a culture of innovation and strong customer commitment for both internal and external stakeholders.
Platform Development
- Oversee the design and implementation of a platform containing data pipelines, feature stores and model deployment frameworks.
- Develop and enhance MLOps practices to streamline model lifecycle management from development to production.
Data Governance
- Institute best practices for data security, compliance and quality to ensure safe and secure use of AI/ML models.
Stakeholder Engagement
- Partner with product, engineering and data science teams to understand requirements and translate them into platform capabilities.
- Communicate progress and impact to executive leadership and key stakeholders.
Operational Excellence
- Establish SLI/SLO metrics for observability of the Data and ML Platform, along with alerting, to ensure a high level of reliability and performance.
- Drive continuous improvement through data-driven insights and operational metrics.
What You'll Need
- 10+ years of experience in data engineering, ML platform development, or related fields, with at least 5 years in a leadership role.
- Familiarity with typical machine learning algorithms from an engineering perspective; familiarity with supervised/unsupervised approaches: how, why and when labeled data is created and used.
- Knowledge of ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience with modern MLOps platforms such as MLflow, Kubeflow or SageMaker preferred.
- Experience with data platform products and frameworks like Apache Spark, Flink or comparable tools in GCP, and orchestration technologies (e.g., Kubernetes, Airflow).
- Experience with Apache Iceberg is a plus.
- Deep understanding of machine learning workflows, including model training, deployment and monitoring.
- Familiarity with data visualization tools and techniques.
- Experience with bootstrapping new teams and growing them to make a large impact.
- Experience operating as a site lead within a company is a bonus.
- Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.
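The SLI/SLO responsibility mentioned under Operational Excellence can be made concrete with a small worked example. The sketch below computes an availability SLI and error-budget burn in pure Python; the traffic numbers and 99.9% target are hypothetical, not figures from the posting:

```python
def slo_report(total_requests, failed_requests, slo_target=0.999):
    """Compute an availability SLI and the fraction of the error budget
    consumed -- the kind of SLI/SLO metric a platform team establishes
    for observability and alerting."""
    sli = 1 - failed_requests / total_requests
    budget = 1 - slo_target                       # allowed failure fraction
    burn = (failed_requests / total_requests) / budget
    # Alert once the whole budget for the window has been consumed
    return {"sli": sli, "error_budget_burned": burn, "alert": burn >= 1.0}

# Hypothetical traffic: 1,000,000 requests, 800 failures against a 99.9% SLO
print(slo_report(1_000_000, 800))
```

With these numbers the SLI is 99.92%, 80% of the error budget is burned, and no alert fires yet; real systems compute this over rolling windows with multiple burn-rate thresholds.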
Benefits Of Working At CrowdStrike
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Geographic neighbourhood groups and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe
CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program.
CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs, on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
Posted 4 weeks ago
8.0 - 10.0 years
12 - 17 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
- Build and configure Kafka connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
- Automate data pipeline deployment, monitoring, and maintenance tasks.
- Stay up to date with the latest advancements in data streaming technologies and best practices.
- Contribute to the development of data engineering standards and best practices within the organization.
- Participate in code reviews and contribute to a collaborative and supportive team environment.
- Work closely with other architects and tech leads in India and the US, and create POCs and MVPs.
- Provide regular updates on task status and risks to the project manager.
Preferred Candidate Profile
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka brokers, producers, consumers, and Schema Registry.
- Hands-on experience with ksqlDB for real-time data transformations and stream processing.
- Experience with Kafka Connect and building custom connectors.
- Extensive experience in implementing large-scale data ingestion and curation solutions.
- Good hands-on experience with a big data technology stack on any cloud platform.
- Excellent problem-solving, analytical and communication skills.
- Ability to work independently and as part of a team.
Good to have: experience in Google Cloud, healthcare industry experience, experience in Agile.
Mandatory Skills: AWS Kinesis, Java, Kafka, Python, AWS Glue, AWS Lambda, AWS S3, Scala, Apache Spark Streaming, ANSI SQL
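The stream-enrichment and data-quality responsibilities above combine two operations: filtering out bad records and joining good ones against reference data. A minimal pure-Python sketch of that pattern follows; the record fields, lookup table, and dead-letter routing are illustrative assumptions, not a real ksqlDB or Flink API:

```python
def enrich_and_validate(records, customer_lookup):
    """Stream-style pass over order events: drop records failing basic
    quality checks, enrich the rest from a lookup table -- roughly what a
    ksqlDB join + filter or a Flink enrichment operator performs."""
    good, rejected = [], []
    for rec in records:
        # Quality checks: positive amount and a known customer
        if rec.get("amount", 0) <= 0 or rec.get("customer_id") not in customer_lookup:
            rejected.append(rec)          # would route to a dead-letter topic
            continue
        enriched = dict(rec, region=customer_lookup[rec["customer_id"]])
        good.append(enriched)
    return good, rejected

customers = {"c1": "APAC", "c2": "EMEA"}          # hypothetical lookup table
orders = [{"customer_id": "c1", "amount": 50},
          {"customer_id": "c9", "amount": 10},    # unknown customer
          {"customer_id": "c2", "amount": -5}]    # invalid amount
good, bad = enrich_and_validate(orders, customers)
print(len(good), len(bad))  # 1 2
```

In a real pipeline the lookup would be a changelog-backed table (a KTable or Flink broadcast state) rather than an in-memory dict.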
Posted 4 weeks ago
5.0 - 10.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Job Summary
Join Synechron as a DevSecOps Engineer, a pivotal role designed to enhance our software release lifecycle through robust automation and security practices. As a DevSecOps Engineer, you will contribute significantly to our business objectives by ensuring high-performance and secure infrastructure, facilitating seamless software deployment, and driving innovation within cloud environments.
Software Requirements
Required:
- Proficiency in CI/CD tools such as Jenkins, CodePipeline, CodeBuild, CodeCommit
- Hands-on experience with DevSecOps practices
- Automation scripting languages: shell scripting, Python (or similar)
Preferred:
- Familiarity with streaming technologies: Kafka, AWS Kinesis, AWS Kinesis Data Firehose, Flink
Overall Responsibilities
- Manage the entire software release lifecycle, focusing on build automation and production deployment.
- Maintain and optimize CI/CD pipelines to ensure reliable software releases across environments.
- Engage in platform lifecycle improvement from design to deployment, refining processes for operational excellence.
- Provide pre-go-live support including system design consulting, capacity planning, and launch reviews.
- Implement and enforce best practices to optimize performance, reliability, security, and cost efficiency.
- Enable scalable systems through automation and advocate for changes enhancing reliability and speed.
- Lead priority incident response and conduct blameless postmortems for continuous improvement.
Technical Skills (By Category)
Programming Languages:
- Required: shell scripting, Python
- Preferred: other automation scripting languages
Cloud Technologies:
- Required: experience with cloud design and best practices, particularly AWS
Development Tools and Methodologies:
- Required: CI/CD tools (Jenkins, CodePipeline, CodeBuild, CodeCommit)
- Preferred: exposure to streaming technologies (Kafka, AWS Kinesis)
Security Protocols:
- Required: DevSecOps practices
Experience Requirements
- Minimum of 7+ years in infrastructure performance and cloud design roles
- Proven experience with architecture and design at scale
- Industry experience in technology or software development environments preferred
- Alternative pathways: demonstrated experience in similar roles across other sectors
Day-to-Day Activities
- Engage in regular collaboration with cross-functional teams to refine deployment strategies
- Conduct regular system health checks and implement monitoring solutions
- Participate in strategic meetings to discuss and implement best practices for system reliability
- Manage deliverables related to software deployment and automation projects
- Exercise decision-making authority in incident management and system improvement discussions
Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
- Certifications in DevOps, AWS, or related fields preferred
- Commitment to continuous learning and professional development in evolving technologies
Professional Competencies
- Critical thinking and problem-solving capabilities to address complex infrastructure challenges
- Strong leadership and teamwork abilities to foster collaborative environments
- Excellent communication skills for effective stakeholder management and technical guidance
- Adaptability to rapidly changing technology landscapes and a proactive learning orientation
- Innovation mindset to drive improvements and efficiencies within cloud environments
- Effective time and priority management to balance multiple projects and objectives
Posted 4 weeks ago
10.0 - 12.0 years
9 - 13 Lacs
Chennai
Work from Office
Job Title: Data Architect
Experience: 10-12 Years
Location: Chennai
- 10-12 years of experience as a Data Architect
- Strong expertise in streaming data technologies like Apache Kafka, Flink, Spark Streaming, or Kinesis
- Proficiency in programming languages such as Python, Java, Scala, or Go
- Experience with big data tools like Hadoop, Hive, and data warehouses such as Snowflake, Redshift, Databricks, Microsoft Fabric
- Proficiency in database technologies (SQL, NoSQL, PostgreSQL, MongoDB, DynamoDB, YugabyteDB)
- Should be flexible to work as an individual contributor
Posted 1 month ago
10.0 - 20.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer
Location: India, preferably Bengaluru
Experience Level: 10+ years
Employment Type: Full-Time
Job Summary: Our organization is seeking a highly experienced and technically proficient Senior Data Engineer with over 10 years of experience in designing, building, and optimizing data pipelines and applications in big data environments. The ideal candidate must have strong hands-on experience in workflow orchestration, data processing, and streaming platforms, and possess full-stack development capabilities.
Key Responsibilities:
1. Design, build, and maintain scalable and reliable data pipelines using Apache Airflow.
2. Develop and optimize big data workflows using Apache Spark, Hive, and Apache Flink.
3. Lead the implementation and integration of Apache Kafka for real-time and batch data processing.
4. Apply strong Java full-stack development skills to build and support data-driven applications.
5. Utilize Python to develop scripts and utilities and to support data workflows and integrations.
6. Work closely with data scientists, analysts, and platform engineers to support a high-volume, high-velocity data environment.
7. Drive performance tuning, monitoring, and troubleshooting across the data stack.
8. Ensure data integrity, security, and governance across all processing layers.
9. Mentor junior engineers and contribute to technical decision-making processes.
Required Skills and Experience:
- Minimum 10 years of experience in data engineering or related fields.
- Proven experience with Apache Airflow for orchestration.
- Deep expertise in Apache Spark, Hive, and Apache Flink.
- Mandatory experience as a full-stack Java developer.
- Proficiency in Python programming for data engineering tasks.
- Demonstrated experience in Apache Kafka development and implementation.
- Prior hands-on experience in a big data ecosystem involving distributed systems and large-scale data processing.
- Strong understanding of data modeling, ETL/ELT design, and streaming architectures.
- Excellent problem-solving, communication, and collaboration skills.
Preferred Qualifications:
- Experience working in cloud-based environments (e.g., AWS, Azure, GCP).
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to CI/CD pipelines and DevOps practices in data projects.
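The Airflow orchestration experience this posting asks for boils down to declaring task dependencies and having a scheduler run them in order. As a rough illustration, a dependency-ordered executor (Kahn's algorithm) can be sketched in pure Python; the task names and dict-based DAG format are invented for this example, not Airflow's actual API:

```python
from collections import deque

def run_dag(tasks, deps):
    """Execute callables in dependency order -- the scheduling contract an
    orchestrator like Airflow provides. deps maps task -> upstream tasks."""
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for downstream, upstreams in deps.items():
        for up in upstreams:
            indeg[downstream] += 1
            children[up].append(downstream)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                      # run the task itself
        order.append(t)
        for child in children[t]:
            indeg[child] -= 1
            if indeg[child] == 0:
                ready.append(child)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in DAG")
    return order

log = []
tasks = {n: (lambda n=n: log.append(n)) for n in ["extract", "transform", "load"]}
order = run_dag(tasks, {"transform": ["extract"], "load": ["transform"]})
print(order)  # ['extract', 'transform', 'load']
```

Airflow adds scheduling intervals, retries, backfills, and distributed executors on top of this core dependency resolution.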
Posted 1 month ago
2.0 - 5.0 years
7 - 15 Lacs
Hyderabad
Work from Office
We are looking for a Data Engineer with a tech stack of Python, Pandas, Postgres, Java, and Apache Flink. Candidates must have experience using Python and Java in data ingestion pipelines built with Apache Flink.
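A core step in the ingestion pipelines this role describes is cleaning raw input before loading it into Postgres: parsing, type coercion, and rejecting malformed rows. A minimal pure-Python sketch of that step follows; the column names and CSV sample are hypothetical:

```python
import csv
import io

def clean_rows(csv_text):
    """Parse raw CSV, coerce types, and drop malformed rows -- the kind of
    pre-load cleanup an ingestion pipeline performs before writing to a
    database such as Postgres."""
    rows, errors = [], 0
    for rec in csv.DictReader(io.StringIO(csv_text)):
        try:
            # Coerce to the target schema; failures mark the row as bad
            rows.append({"id": int(rec["id"]), "price": float(rec["price"])})
        except (KeyError, ValueError):
            errors += 1                  # count and skip bad records
    return rows, errors

raw = "id,price\n1,9.99\n2,oops\n3,4.50\n"
rows, errors = clean_rows(raw)
print(len(rows), errors)  # 2 1
```

In practice the same coercion logic would run inside a Pandas transform or a Flink map/filter operator, with rejected rows routed to a quarantine table instead of being silently counted.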
Posted 1 month ago