3.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Job Information: Job Opening ID: ZR_1673_JOB | Date Opened: 20/12/2022 | Industry: Technology | Work Experience: 3-5 years | Job Title: Senior DevOps Engineer | City: Hyderabad | Province: Telangana | Country: India | Postal Code: 500001 | Number of Positions: 4
Roles & Responsibilities:
- 3+ years of working experience in data engineering.
- "Hands-on keyboard" AWS implementation experience across a broad range of AWS services.
- In-depth AWS development experience (containerization with Docker, Amazon EKS, Lambda, EC2, S3, Amazon DocumentDB, PostgreSQL).
- Strong knowledge of DevOps and CI/CD pipelines (GitHub, Jenkins, Artifactory).
- Scripting capability and the ability to develop AWS environments as code (see the boto3 sketch below).
- Hands-on AWS experience with at least one implementation, preferably in an enterprise-scale environment.
- Experience with core AWS platform architecture, including areas such as Organizations, account design, VPCs, subnets, and segmentation strategies; backup and disaster-recovery approach and design; environment and application automation; CloudFormation and third-party automation approach/strategy; network connectivity (Direct Connect and VPN); AWS cost management and optimization.
- Skilled in Python data libraries (NumPy, Pandas DataFrame).
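None of these listings include code, but as a hedged illustration of the "AWS environments as code" scripting capability this posting asks for, here is a minimal boto3 sketch. The bucket name, region, and EKS lookup are illustrative assumptions, not details from the listing.

```python
# Minimal sketch, assuming boto3 credentials are configured in the environment.
# Bucket name and region are placeholders invented for illustration.
import boto3

REGION = "ap-south-1"
BUCKET = "example-data-platform-bucket"  # hypothetical name

s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
# Version the bucket so pipeline artifacts can be rolled back.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# List EKS clusters visible to these credentials (the posting names Amazon EKS).
eks = boto3.client("eks", region_name=REGION)
print(eks.list_clusters()["clusters"])
```

In practice a team like this would more likely declare the same resources in CloudFormation, which the posting also names; the boto3 form is simply the shortest runnable illustration.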
Posted 3 weeks ago
6.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID: ZR_2470_JOB | Date Opened: 03/05/2025 | Industry: IT Services | Work Experience: 6-10 years | Job Title: Sr. Data Engineer | City: Bangalore South | Province: Karnataka | Country: India | Postal Code: 560050 | Number of Positions: 1
We're looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows.
- Collaborate with teams to gather requirements and build scalable solutions.
- Ensure data governance, security, and optimal performance of systems.
- Mentor junior engineers and drive end-to-end project delivery.
Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects.
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms.
- Expertise in big data tools (e.g., Apache Spark, Kafka).
- Excellent communication skills and leadership abilities.
Preferred:
- Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices (see the sketch below).
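As a hedged sketch of the orchestration experience this posting prefers (Airflow is named only as an example tool), here is a minimal Airflow 2.x DAG with stubbed extract/transform/load tasks; the DAG id and schedule are invented for illustration.

```python
# Minimal Airflow 2.x sketch (the `schedule` argument needs Airflow >= 2.4).
# Task bodies are stubs; a real warehouse load would call out to source
# systems and the target platform.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw rows from the source system")


def transform():
    print("clean and conform the extracted rows")


def load():
    print("write conformed rows to the warehouse")


with DAG(
    dag_id="example_daily_warehouse_load",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```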
Posted 3 weeks ago
5.0 - 8.0 years
2 - 6 Lacs
Pune
Work from Office
Job Information: Job Opening ID: ZR_2098_JOB | Date Opened: 13/01/2024 | Industry: Technology | Job Type: Contract | Work Experience: 5-8 years | Job Title: DCT Data Engineer | City: Pune City | Province: Maharashtra | Country: India | Postal Code: 411001 | Number of Positions: 4
Locations: Pune, Bangalore, Indore
Work mode: Work from Office
Skills:
- Informatica Data Quality (IDQ)
- Azure Databricks
- Azure Data Lake
- Azure Data Factory
- API integration
Posted 3 weeks ago
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID: ZR_1628_JOB | Date Opened: 09/12/2022 | Industry: Technology | Work Experience: 5-8 years | Job Title: Data Engineer | City: Bangalore | Province: Karnataka | Country: India | Postal Code: 560001 | Number of Positions: 4
Roles and Responsibilities:
- 4+ years of experience as a data developer using Python.
- Knowledge of Spark/PySpark is preferable but not mandatory.
- Azure cloud experience preferred, ideally across the Azure platform including Azure Data Lake, Databricks, and Data Factory; alternate cloud experience is fine.
- Working knowledge of different file formats such as JSON, Parquet, and CSV.
- Familiarity with data encryption and data masking.
- Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB is preferred.
- Team player: reliable, self-motivated, and self-disciplined.
Posted 3 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Hyderabad
Work from Office
- At least 10 years of proven experience in data analytics and data engineering.
- Experience with SQL Server queries and PL/SQL.
- Experience with Azure Data Factory.
- Strong expertise in database development and migration.
- Experience in BODS reverse engineering.
- Experience with SAP IQ (Sybase) database administration.
- Experience in Crystal Reports development.
- Ensures all work is carried out to the highest quality standards, with appropriately detailed documentation.
- Good communication skills; proactive and a team player.
- English language skills in speaking and writing.
Posted 3 weeks ago
4.0 - 6.0 years
15 - 20 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
KPI Partners is seeking a highly skilled and experienced GenAI Engineer with a strong background in data engineering and software development to join our team. The ideal candidate will focus on enhancing our information retrieval and generation capabilities, with specific experience in Azure AI Search, data processing for RAG, multimodal data integration, and familiarity with Databricks.
Key Responsibilities:
- Design, develop, and optimize Retrieval-Augmented Generation (RAG) models to improve information retrieval and generation processes within our applications.
- Develop and maintain search solutions using Azure AI Search to ensure efficient and accurate information access (see the retrieval sketch below).
- Process and prepare data to support RAG workflows, ensuring data quality and relevance.
- Integrate and manage various data types (e.g., text, images) to enhance retrieval and generation capabilities.
- Work closely with cross-functional teams to integrate data into our existing retrieval ecosystem, ensuring seamless functionality and performance.
- Ensure the scalability, reliability, and performance of data retrieval in production environments.
- Stay updated with the latest advancements in AI, ML, and data engineering to drive innovation and maintain a competitive edge.
What we're looking for:
- Master's degree in Data Science or a related field is preferred.
- Approximately 8 years of experience in Data Science, MLOps, and Data Engineering.
- Proven experience in AI and ML solution implementation, particularly in semiconductor manufacturing.
- Proficiency in Python.
- Proven experience in data engineering and software development, with a focus on building and deploying RAG pipelines or similar information retrieval systems.
- Familiarity with processing multimodal data (e.g., text, images) for retrieval and generation tasks.
- Strong understanding of database systems (SQL and NoSQL) and data warehousing solutions.
- Proficiency in Azure AI, Databricks, and other relevant tools.
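The posting names Azure AI Search as the retrieval layer for RAG. As a minimal sketch, the retrieval-then-generate flow might look like the following; the endpoint, key, index name, and `content` field are placeholders, and the generation call is deliberately stubbed because the listing does not name a generation service.

```python
# Retrieval step of a RAG flow against Azure AI Search, via the
# azure-search-documents SDK. All service details are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",  # placeholder
    index_name="docs-index",                                  # placeholder
    credential=AzureKeyCredential("<api-key>"),               # placeholder
)


def call_llm(prompt: str) -> str:
    # Stub: swap in whatever generation model the team actually uses.
    return f"[LLM response for a prompt of {len(prompt)} characters]"


def retrieve_context(query: str, k: int = 5) -> str:
    """Fetch the top-k matching chunks and join them into one context string."""
    results = client.search(search_text=query, top=k)
    return "\n".join(doc["content"] for doc in results)  # assumes a 'content' field


def answer(query: str) -> str:
    context = retrieve_context(query)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```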
Posted 3 weeks ago
4.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Role Description: As part of the cybersecurity organization, in this vital role you will be responsible for designing, building, and maintaining data infrastructure to support data-driven decision-making. This role involves working with large datasets, developing reports, executing data governance initiatives, and ensuring data is accessible, reliable, and efficiently managed. The role sits at the intersection of data infrastructure and business insight delivery, requiring the Data Engineer to design and build robust data pipelines while also translating data into meaningful visualizations for stakeholders across the organization. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture, ETL processes, and cybersecurity data frameworks.
Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member assisting in the design and development of the data pipeline.
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Develop and maintain interactive dashboards and reports using tools like Tableau, ensuring data accuracy and usability.
- Schedule and manage workflows to ensure pipelines run on schedule and are monitored for failures.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with data scientists to develop pipelines that meet dynamic business needs.
- Share and discuss findings with team members practicing the SAFe Agile delivery model.
What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The data engineering professional we seek is one with these qualifications.
Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience.
Preferred Qualifications:
- Hands-on experience with data practices, technologies, and platforms such as Databricks, Python, GitLab, LucidChart, etc.
- Hands-on experience with data visualization and dashboarding tools (Tableau, Power BI, or similar) is a plus.
- Proficiency in data analysis tools (e.g., SQL) and experience with data sourcing tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Understanding of data governance frameworks, tools, and best practices.
- Knowledge of and experience with data standards (FAIR) and protection regulations and compliance requirements (e.g., GDPR, CCPA).
Good-to-Have Skills:
- Experience with ETL tools and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, and cloud data platforms.
- Experience working in a product team environment.
- Experience working in an Agile environment.
Professional Certifications:
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
Soft Skills:
- Initiative to explore alternate technologies and approaches to solving problems.
- Skilled in breaking down problems, documenting problem statements, and estimating effort.
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to handle multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
Posted 3 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
We are looking for a Data Engineer with experience in data warehouse projects, strong expertise in Snowflake, and hands-on knowledge of Azure Data Factory (ADF) and dbt (Data Build Tool). Proficiency in Python scripting is an added advantage.
Key Responsibilities:
- Design, develop, and optimize data pipelines and ETL processes for data warehousing projects.
- Work extensively with Snowflake, ensuring efficient data modeling and query optimization (see the sketch below).
- Develop and manage data workflows using Azure Data Factory (ADF) for seamless data integration.
- Implement data transformations, testing, and documentation using dbt.
- Collaborate with cross-functional teams to ensure data accuracy, consistency, and security.
- Troubleshoot data-related issues.
- (Optional) Utilize Python for scripting, automation, and data processing tasks.
Required Skills & Qualifications:
- Experience in data warehousing with a strong understanding of best practices.
- Hands-on experience with Snowflake (data modeling, query optimization).
- Proficiency in Azure Data Factory (ADF) for data pipeline development.
- Strong working knowledge of dbt (Data Build Tool) for data transformations.
- (Optional) Experience in Python scripting for automation and data manipulation.
- Good understanding of SQL and query optimization techniques.
- Experience in cloud-based data solutions (Azure).
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Experience with CI/CD pipelines for data engineering.
Why Join Us:
- Opportunity to work on cutting-edge data engineering projects.
- Work with a highly skilled and collaborative team.
- Exposure to modern cloud-based data solutions.
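As a hedged sketch of the Snowflake work described above, here is a minimal query through the snowflake-connector-python package; the account, credentials, and table are placeholders, not details from the posting.

```python
# Minimal Snowflake query sketch; every identifier below is a placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # A typical warehouse-style aggregate over a hypothetical orders table.
    cur.execute(
        "SELECT order_date, SUM(amount) AS total "
        "FROM orders GROUP BY order_date ORDER BY order_date"
    )
    for order_date, total in cur:
        print(order_date, total)
finally:
    cur.close()
    conn.close()
```

In a dbt-centered stack the aggregate itself would usually live in a dbt model rather than inline SQL; the connector form keeps the sketch self-contained.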
Posted 3 weeks ago
3.0 - 6.0 years
5 - 10 Lacs
Pune
Work from Office
Experience working on complex, medium-to-large projects. The candidate will have expertise in data warehousing concepts and ETL & data integration. Minimum of 3-6 years of experience in Ab Initio. Ability to work independently. Good experience in Hadoop. Able to work on UNIX.
Posted 3 weeks ago
12.0 - 14.0 years
16 - 18 Lacs
Mumbai
Work from Office
Associate Director, Data Engineering (J2EE/Angular/React Full Stack Individual Contributor)
About the Role: Grade Level (for internal use): 12
The Team: You will be an expert contributor and part of the Rating Organization's Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization's critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform.
Responsibilities:
- Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform.
- Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving.
- Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
- Manage and improve existing software solutions, ensuring high performance and scalability.
- Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
- Produce comprehensive technical design documents and conduct technical walkthroughs.
Experience & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent is required.
- Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development.
- 12+ years of total experience, with 8+ years designing enterprise products, modern data stacks, and analytics platforms.
- 6+ years of hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design patterns and full-stack knowledge, including modern distributed front-end and back-end technology stacks.
- 5+ years of full-stack development experience in modern web development technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB.
- Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus.
- Experience designing transactional/data warehouse/data lake and data integrations with the big data ecosystem, leveraging AWS cloud technologies.
- Thorough understanding of distributed computing.
- Passionate, smart, and articulate developer.
- Quality-first mindset with a strong background in developing products for a global audience at scale.
- Excellent analytical thinking, interpersonal, oral, and written communication skills, with a strong ability to influence both IT and business partners.
- Superior knowledge of system architecture, object-oriented design, and design patterns.
- Good work ethic; self-starter and results-oriented.
Additional Preferred Qualifications:
- Experience working with AWS.
- Experience with the SAFe Agile Framework.
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
- Ability to prioritize and manage work to critical project timelines in a fast-paced environment.
- Ability to train and mentor.
Posted 3 weeks ago
10.0 - 14.0 years
12 - 16 Lacs
Mumbai, Maharastra
Work from Office
About the Role: Grade Level (for internal use): 11
The Team: You will be an expert contributor and part of the Rating Organization's Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization's critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform.
Responsibilities:
- Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform.
- Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving.
- Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
- Manage and improve existing software solutions, ensuring high performance and scalability.
- Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
- Produce comprehensive technical design documents and conduct technical walkthroughs.
Experience & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent is required.
- Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development.
- 10+ years of experience, with 4+ years designing/developing enterprise products, modern tech stacks, and data platforms.
- 4+ years of hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design patterns and full-stack knowledge, including modern distributed front-end and back-end technology stacks.
- 5+ years of full-stack development experience in modern web development technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB.
- Experience designing transactional/data warehouse/data lake and data integrations with the big data ecosystem, leveraging AWS cloud technologies.
- Thorough understanding of distributed computing.
- Passionate, smart, and articulate developer.
- Quality-first mindset with a strong background in developing products for a global audience at scale.
- Excellent analytical thinking, interpersonal, oral, and written communication skills, with a strong ability to influence both IT and business partners.
- Superior knowledge of system architecture, object-oriented design, and design patterns.
- Good work ethic; self-starter and results-oriented.
- Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus.
Additional Preferred Qualifications:
- Experience working with AWS.
- Experience with the SAFe Agile Framework.
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
- Ability to prioritize and manage work to critical project timelines in a fast-paced environment.
- Ability to train and mentor.
Posted 3 weeks ago
1.0 - 4.0 years
3 - 6 Lacs
Pune
Hybrid
Must-have skills: Python, R, SQL, Power BI, Spotfire, Hadoop, Hive
Good-to-have skills: Spark, Statistics, Big Data
Job Description:
Being part of a digital delivery data group supporting bp Solutions, you will apply your domain knowledge and familiarity with domain data processes to support the organisation. Part of bp's Production & Operations business, bp Solutions has hubs in London, Pune, and Houston. The data team provides daily operational data management, data engineering, and analytics support to this organisation across a broad range of activity, from facilities and subsea engineering to logistics.
Let me tell you about the role:
A data analyst collects, processes, and performs analyses on a variety of datasets. Their key responsibilities include interpreting complex data sets to identify trends and patterns, using analytical tools and methods to generate actionable insights, and creating visualizations and reports to communicate those insights and recommendations to support decision-making. Data analysts collaborate closely with business domain stakeholders to understand their data analysis needs, ensure data accuracy, and recommend data-driven solutions to solve value-impacting business problems.
You might be a good fit for this role if you:
- have strong domain knowledge in at least one of: facilities or subsea engineering, maintenance and reliability, operations, or logistics;
- have strong analytical skills and demonstrable capability in applying analytical techniques and Python scripting to solve practical problems;
- are curious and keen to apply new technologies, trends, and methods to improve existing standards and the capabilities of the Subsurface community;
- are well organized and self-motivated, balancing proactive and reactive approaches across multiple priorities to complete tasks on time;
- apply judgment and common sense, using insight and good judgment to inform actions and respond to situations as they arise.
What you will deliver:
- Be a bridge between asset teams and Technology, combining an in-depth understanding of one or more relevant domains with data & analytics skills.
- Provide actionable, data-driven insights by combining deep statistical skills, data manipulation capabilities, and business insight.
- Proactively identify impactful opportunities and autonomously complete data analysis, applying existing data & analytics strategies relevant to your immediate scope.
- Clean, pre-process, and analyse both structured and unstructured data.
- Develop data visualisations to analyse and interrogate broad datasets (e.g., with tools such as Microsoft Power BI, Spotfire, or similar).
- Present results to peers and senior management, influencing decision-making.
What you will need to be successful (experience and qualifications):
Essential:
- MSc or equivalent experience in a quantitative field, preferably statistics.
- Strong domain knowledge in at least one of: facilities or subsea engineering, maintenance and reliability, operations, or logistics.
- Hands-on experience carrying out data analytics, data mining, and product analytics in complex, fast-paced environments.
- Applied knowledge of data analytics and data pipelining tools and approaches across all data lifecycle stages.
- Deep understanding of a few commonly available statistical approaches, and a high-level understanding of several others.
- Advanced SQL knowledge.
- Advanced scripting experience in R or Python.
- Ability to write and maintain moderately complex data pipelines.
- Customer-centric and pragmatic mindset; focus on value delivery and swift execution while maintaining attention to detail.
- Excellent communication and interpersonal skills, with the ability to effectively communicate ideas, expectations, and feedback to team members, stakeholders, and customers; foster collaboration and teamwork.
Desired:
- Advanced analytics degree.
- Experience applying analytics to support engineering turnarounds.
- Experience with big data technologies (e.g., Hadoop, Hive, and Spark) is a plus.
Posted 3 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Role Overview: We are looking for a skilled .NET Backend Developer with Azure data engineering expertise to join our dynamic and growing team. This role demands strong hands-on experience in .NET technologies along with cloud-based data engineering platforms like Azure Databricks or Snowflake.
Primary Technical Skills (Must-Have):
- .NET Core / ASP.NET Core / C# (strong backend development)
- Web API & microservices architecture
- SQL Server, NoSQL, Entity Framework (EF 6+)
- Azure cloud platform, Azure data engineering
- Azure Databricks, Microsoft Fabric, or Snowflake
- Database performance tuning & optimization
- Strong understanding of OOP & design patterns
- Agile methodology experience
Nice to Have (Secondary Skills):
- Angular / JavaScript frameworks
- MongoDB
- NPM
- Azure DevOps build/release configuration
- Strong troubleshooting and communication skills
- Experience working with US clients is a plus
Required Qualifications:
- B.Tech / B.E / MCA / M.Tech or equivalent
- Minimum of 6 years of relevant hands-on experience
- Must be willing to work onsite in Hyderabad
- Excellent communication (verbal & written)
Posted 3 weeks ago
6.0 - 8.0 years
10 - 14 Lacs
Ahmedabad
Remote
Hiring Senior Azure Data Architect & Performance Engineer (Remote, 6 PM–3 AM IST). Expert in SQL Server, Azure, T-SQL, PowerShell, performance tuning, Oracle to SQL migration, Snowflake. 6–8 yrs exp. Strong DB internals & Azure skills required.
Posted 3 weeks ago
4.0 - 8.0 years
6 - 12 Lacs
Bengaluru
Work from Office
4+ years: Data science, ML frameworks, MLOps, Python, data engineering, cloud platforms, edge computing, data manipulation libraries, model development, Object, experiment design, A/B testing. Reach me at mailcv108@gmail.com or WhatsApp me at +91 9611702105.
Posted 3 weeks ago
2.0 - 7.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Roles and Responsibilities:
- Design, develop, and deploy advanced AI models with a focus on generative AI, including transformer architectures (e.g., GPT, BERT, T5) and other deep learning models used for text, image, or multimodal generation.
- Work with extensive and complex datasets, performing tasks such as cleaning, preprocessing, and transforming data to meet quality and relevance standards for generative model training.
- Collaborate with cross-functional teams (e.g., product, engineering, data science) to identify project objectives and create solutions using generative AI tailored to business needs.
- Implement, fine-tune, and scale generative AI models in production environments, ensuring robust model performance and efficient resource utilization.
- Develop pipelines and frameworks for efficient data ingestion, model training, evaluation, and deployment, including A/B testing and monitoring of generative models in production.
- Stay informed about the latest advancements in generative AI research, techniques, and tools, applying new findings to improve model performance, usability, and scalability.
- Document and communicate technical specifications, algorithms, and project outcomes to technical and non-technical stakeholders, with an emphasis on explainability and responsible AI practices.
Qualifications Required:
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. A relevant Ph.D. or research experience in generative AI is a plus.
- Experience: 2-11 years of experience in machine learning, with 2+ years in designing and implementing generative AI models or working specifically with transformer-based models.
Skills and Experience Required:
- Generative AI: transformer models, GANs, VAEs, text generation, image generation
- Machine Learning: algorithms, deep learning, neural networks
- Programming: Python, SQL; familiarity with libraries such as Hugging Face Transformers, PyTorch, TensorFlow (see the generation sketch below)
- MLOps: Docker, Kubernetes, MLflow, cloud platforms (AWS, GCP, Azure)
- Data Engineering: data preprocessing, feature engineering, data cleaning
Why you'll love working with us:
- BRING YOUR PASSION AND FUN. Corporate culture woven from highly diverse perspectives and insights.
- BALANCE WORK AND PERSONAL TIME LIKE A BOSS. Resources and flexibility to more easily integrate your work and your life.
- BECOME A CERTIFIED SMARTY PANTS. Ongoing training and development opportunities for even the most insatiable learner.
- START-UP SPIRIT (a good ten-plus years in, yet we maintain it)
- FLEXIBLE WORKING HOURS
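As a small, hedged illustration of the transformer tooling the skills list names (Hugging Face Transformers), here is a text-generation sketch; gpt2 simply stands in for whatever model a team like this would actually fine-tune.

```python
# Text generation with the Hugging Face `pipeline` API.
# Downloads the model on first run; gpt2 is an illustrative stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Generative AI can help cross-functional teams by",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```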
Posted 3 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Overview: The Sr. Software Engineer will be part of a team of some of the best and brightest in the industry who are focused on full-cycle development of scalable web and responsive applications that touch our growing customer base every day. As part of the Labs team, you will work collaboratively with agile team members to design new system functionality and to research and remedy complex issues as they arise, embodying a passion for continuous improvement and test-driven development.
Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Collaborate with software engineers, data scientists, and product managers to understand data requirements and provide tailored solutions.
- Optimize and enhance the performance of our data infrastructure to support analytics and reporting.
- Implement and maintain data governance and security best practices.
- Troubleshoot and resolve data-related issues and ensure data quality and integrity.
- Mentor and guide junior data engineers, fostering a culture of continuous learning and improvement.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in data engineering or a similar role.
- Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Strong programming skills in Python.
- Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and containerization (e.g., Docker).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Posted 3 weeks ago
8.0 - 10.0 years
9 - 13 Lacs
Bengaluru
Work from Office
What you'll be doing:
- Assist in developing machine learning models based on project requirements.
- Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality.
- Perform statistical analysis and fine-tuning using test results.
- Support training and retraining of ML systems as needed.
- Help build data pipelines for collecting and processing data efficiently.
- Follow coding and quality standards while developing AI/ML solutions.
- Contribute to frameworks that help operationalize AI models.
What we seek in you:
- 8+ years of experience in the IT industry.
- Strong in programming languages like Python.
- Hands-on experience with one cloud (GCP preferred).
- Experience working with Docker.
- Environment management (e.g., venv, pip, poetry).
- Experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
- Understanding of the full ML cycle end-to-end.
- Data engineering and feature engineering techniques.
- Experience with ML modelling and evaluation metrics.
- Experience with TensorFlow, PyTorch, or another framework.
- Experience with model monitoring.
- Advanced SQL knowledge.
- Awareness of streaming concepts like windowing, late arrival, triggers, etc.
- Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases (see the BigQuery sketch below).
- Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices.
- Schedule: Cloud Composer, Airflow.
- Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink.
- CI/CD: Bitbucket + Jenkins / GitLab; infrastructure as code: Terraform.
Life at Next: At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.
Perks of working with us:
- Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
- Abundant opportunities for engagement with customers, product managers, and leadership.
- You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions.
- Cultivate and leverage robust connections within diverse communities of interest.
- Choose your mentor to navigate your current endeavors and steer your future trajectory.
- Embrace continuous learning and upskilling opportunities through Nexversity.
- Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies.
- Embrace a hybrid work model promoting work-life balance.
- Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
- Embark on accelerated career paths to actualize your professional aspirations.
Who we are: We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet unique needs for our customers. Join our passionate team and tailor your growth with us!
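Since the storage stack above names BigQuery, here is a hedged sketch of a parameterized query via the google-cloud-bigquery client; the project, dataset, and table names are invented for illustration.

```python
# Parameterized BigQuery query sketch; dataset and table are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the project from the environment

job = client.query(
    """
    SELECT user_id, COUNT(*) AS events
    FROM `my_project.my_dataset.events`
    WHERE event_date >= @start
    GROUP BY user_id
    """,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start", "DATE", "2025-01-01")
        ]
    ),
)
for row in job.result():
    print(row.user_id, row.events)
```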
Posted 3 weeks ago
7.0 - 12.0 years
18 - 30 Lacs
Chennai
Hybrid
Hi, we have a vacancy for a Sr. Data Engineer. We are seeking an experienced Senior Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing and implementing the data engineering framework.
Responsibilities:
- Strong skills in BigQuery, GCP Cloud Data Fusion (for ETL/ELT), and Power BI; strong skills in data pipelines are required.
- Able to work with Power BI and Power BI reporting.
- Design and implement the data engineering framework and data pipelines using Databricks and Azure Data Factory (see the PySpark sketch below).
- Document the high-level design components of the Databricks data pipeline framework.
- Evaluate and document the current dependencies on the existing DEI toolset and agree a migration plan.
- Lead the design and implementation of an MVP Databricks framework.
- Document and agree an aligned set of standards to support the implementation of a candidate pipeline under the new framework.
- Support integrating a test automation approach into the Databricks framework, in conjunction with the test engineering function, to support CI/CD and automated testing.
- Support the development team's capability building by establishing an L&D and knowledge-transition approach.
- Support the implementation of data pipelines against the new framework, in line with the agreed migration plan.
- Ensure data quality management, including profiling, cleansing, and deduplication, to support the build of data products for clients.
Skill Set:
- Experience working in Azure Cloud using Azure SQL, Azure Databricks, Azure Data Lake, Delta Lake, and Azure DevOps.
- Proficient Python, PySpark, and SQL coding skills.
- Data profiling and data modelling experience on large data transformation projects, creating data products and data pipelines.
- Experience creating data management frameworks and data pipelines that are metadata- and business-rules-driven, using Databricks.
- Experience reviewing datasets for data products in terms of data quality management and populating data schemas set by data modellers.
- Experience with data profiling, data quality management, and data cleansing tools.
Immediate joining or short notice is required. Please call Varsha (7200847046) for more information.
Thanks, Varsha (7200847046)
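As a hedged sketch of the Databricks/PySpark pipeline work described above (paths and column names are invented, and the Delta write assumes a Databricks or Delta Lake-enabled runtime):

```python
# PySpark cleanse-and-deduplicate sketch; paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/raw/customers.csv")

clean = (
    raw.filter(F.col("customer_id").isNotNull())             # drop rows with no key
       .withColumn("email", F.lower(F.trim(F.col("email"))))  # normalize emails
       .dropDuplicates(["customer_id"])                       # deduplicate on the key
)

# Delta output assumes a Databricks / Delta Lake runtime.
clean.write.format("delta").mode("overwrite").save("/mnt/curated/customers")
```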
Posted 3 weeks ago
6.0 - 11.0 years
13 - 18 Lacs
Ahmedabad
Work from Office
About the Company: e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty-free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys, and Naturium, high-performance, biocompatible, clinically effective and accessible skincare. In our fiscal year 2024, we had net sales of $1 billion, and our business performance has been nothing short of extraordinary, with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and the fastest-growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, flexible time off, year-round half-day Fridays, and a hybrid work environment (3 days in office, 2 days at home). We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us
Job Summary: We're looking for a strategic and technically strong Senior Data Architect to join our high-growth digital team. The selected person will play a critical role in shaping the company's global data architecture and vision. The ideal candidate will lead enterprise-level architecture initiatives, collaborate with engineering and business teams, and guide a growing team of engineers and QA professionals. This role involves deep engagement across domains including Marketing, Product, Finance, and Supply Chain, with a special focus on marketing technology and commercial analytics relevant to the CPG/FMCG industry. The candidate should bring a hands-on mindset, a proven track record in designing scalable data platforms, and the ability to lead through influence. An understanding of industry-standard frameworks (e.g., TOGAF) and tools like CDPs, MMM platforms, and AI-based insights generation will be a strong plus. Curiosity, communication, and architectural leadership are essential to succeed in this role.
Key Responsibilities:
- Enterprise Data Strategy: Design, define, and maintain a holistic data strategy and roadmap that aligns with corporate objectives and fuels digital transformation. Ensure data architecture and products align with enterprise standards and best practices.
- Data Governance & Quality: Establish scalable governance frameworks to ensure data accuracy, privacy, security, and compliance (e.g., GDPR, CCPA). Oversee quality, security, and compliance initiatives.
- Data Architecture & Platforms: Oversee modern data infrastructure (e.g., data lakes, warehouses, streaming) with technologies like Snowflake, Databricks, AWS, and Kafka.
- Marketing Technology Integration: Ensure the data architecture supports marketing technologies and commercial analytics platforms (e.g., CDP, MMM, ProfitSphere) tailored to the CPG/FMCG industry.
- Architectural Leadership: Act as a hands-on architect with the ability to lead through influence. Guide design decisions aligned with industry best practices and e.l.f.'s evolving architecture roadmap.
- Cross-Functional Collaboration: Partner with Marketing, Supply Chain, Finance, R&D, and IT to embed data-driven practices and deliver business impact. Lead the integration of data from multiple sources into a unified data warehouse.
- Cloud Optimization: Optimize data flows and storage for performance and scalability. Lead data migration priorities, manage metadata repositories and data dictionaries, and optimise databases and pipelines for efficiency. Manage and track quality, cataloging, and observability.
- AI/ML Enablement: Drive initiatives to operationalize predictive analytics, personalization, demand forecasting, and more using AI/ML models. Evaluate emerging data technologies and tools to improve the data architecture.
- Team Leadership: Lead, mentor, and enable a high-performing team of data engineers, analysts, and partners through influence and thought leadership.
- Vendor & Tooling Strategy: Manage relationships with external partners and drive evaluations of data and analytics tools.
- Executive Reporting: Provide regular updates and strategic recommendations to executive leadership and key stakeholders.
- Data Enablement: Design data models, database structures, and data integration solutions to support large volumes of data.
Qualifications and Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 18+ years of experience in Information Technology.
- 8+ years of experience in data architecture, data engineering, or a related field, with a focus on large-scale, distributed systems.
- Strong understanding of data use cases in the CPG/FMCG sector.
- Experience with tools such as MMM (Marketing Mix Modeling), CDPs, ProfitSphere, or inventory analytics preferred.
- Awareness of architecture frameworks like TOGAF. Certifications are not mandatory, but candidates must demonstrate clear thinking and experience in applying architecture principles.
- Excellent communication skills and a proven ability to work cross-functionally across global teams; capable of leading with influence, not just execution.
- Knowledge of data warehousing, ETL/ELT processes, and data modeling.
- Deep understanding of data modeling principles, including schema design and dimensional data modeling.
- Strong SQL development experience, including SQL queries and stored procedures.
- Ability to architect and develop scalable data solutions, staying ahead of industry trends and integrating best practices in data engineering.
- Familiarity with data security and governance best practices.
- Experience with cloud computing platforms such as Snowflake, AWS, Azure, or GCP.
- Excellent problem-solving abilities with a focus on data analysis and interpretation.
- Strong communication and collaboration skills; ability to translate complex technical concepts into actionable business strategies.
- Proficiency in one or more programming languages such as Python, Java, or Scala.
This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered a detailed description of all the work required inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisor's discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
Posted 3 weeks ago
7.0 - 12.0 years
25 - 40 Lacs
Pune
Work from Office
Experience as a Data Analyst with GCP & Hadoop is mandatory. Work From Office
Posted 3 weeks ago
5.0 - 7.0 years
7 - 9 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
What You'll Do:
- Analyse Business Requirements: Understand and evaluate client needs.
- Data Model Analysis & Gap Analysis: Review the current data model and identify gaps relative to business requirements.
- Design BI Schema: Develop data warehouse / business intelligence schemas.
- Data Transformation: Utilize Power BI, Tableau, SQL, or ETL tools to transform data.
- Create Reports & Dashboards: Develop interactive reports and dashboards with calculated formulas.
- SQL Expertise: Write complex SQL queries and stored procedures.
- Design BI Solutions: Architect effective business intelligence solutions tailored to business needs.
- Team Management: Lead and guide a team of BI developers.
- Data Integration: Integrate data from multiple sources into BI tools for comprehensive analysis.
- Performance Optimization: Ensure reports and dashboards run efficiently.
- Stakeholder Collaboration: Work with stakeholders to align BI projects with business goals.
- Mandatory Knowledge: In-depth knowledge of data warehousing; data engineering is a plus.
What You'll Bring:
- Educational Qualification: B.Tech in Computer Science or equivalent.
- Experience: Minimum 7+ years of relevant experience.
Share your resume with details on current CTC, expected CTC, and preferred location. Location: Hyderabad, Ahmedabad, Gurgaon, Indore (India)
Posted 3 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Gurugram
Work from Office
We are looking for a Data Engineer to be a key player in building a robust data infrastructure and the data flows that support our advanced forecasting models. Your initial focus will be to create a robust data factory to ensure smooth collection and refresh of actuals data, a critical component that feeds our forecast. Additionally, you will assist in developing mathematical models and supporting the work of ML engineers and data scientists. Your work will significantly impact our ability to deliver timely and insightful forecasts to our clients.
What's in it for you:
- Opportunity to build foundational data infrastructure that directly impacts advanced forecasting models and client delivery.
- Exposure to, and support of, the development of sophisticated mathematical models, machine learning, and data science applications.
- Contribute significantly to delivering timely and insightful forecasts, influencing client decisions in the automotive sector.
- Work in a collaborative environment that fosters continuous learning, mentorship, and professional growth in data engineering and related analytical fields.
Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable and reliable data pipelines for efficient data ingestion, processing, and storage, primarily focusing on creating a data factory for our core forecasting data.
- Data Quality and Integrity: Implement robust data quality checks and validation processes to ensure the accuracy and consistency of data used in our forecasting models (see the sketch below).
- Mathematical Model Support: Collaborate with other data engineers to develop and refine the mathematical logic and models that underpin our forecasting methodologies.
- ML and Data Science Support: Provide data support to our machine learning engineers and data scientists.
- Collaboration and Communication: Work closely with analysts, developers, and other stakeholders to understand data requirements and deliver effective solutions.
- Innovation and Improvement: Continuously explore and evaluate new technologies and methodologies to enhance our data infrastructure and forecasting capabilities.
What We're Looking For:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Minimum of 3 years of experience in data engineering, with a proven track record of building and maintaining data pipelines.
- Strong proficiency in SQL and experience with relational and non-relational databases.
- Strong Python programming skills, with experience in data manipulation and processing libraries (e.g., Pandas, NumPy).
- Experience with mathematical modelling and supporting ML and data science teams.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and cloud-based data services.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Experience in the automotive sector is a plus.
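As a hedged sketch of the data-quality gate described above, here is a minimal Pandas validation pass; the column names and rules are invented for illustration, and real checks would reflect the forecast's actuals schema.

```python
# Pandas data-quality sketch; the schema below is hypothetical.
import pandas as pd


def validate_actuals(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable issues found in an actuals refresh."""
    issues = []
    if df["date"].isna().any():
        issues.append("missing dates")
    if (df["units_sold"] < 0).any():
        issues.append("negative units_sold")
    if df.duplicated(subset=["date", "region"]).any():
        issues.append("duplicate date/region rows")
    return issues


sample = pd.DataFrame(
    {
        "date": ["2025-01-01", "2025-01-01"],
        "region": ["North", "North"],
        "units_sold": [120, -3],
    }
)
print(validate_actuals(sample))  # ['negative units_sold', 'duplicate date/region rows']
```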
Posted 3 weeks ago
12.0 - 16.0 years
45 - 50 Lacs
Mumbai, Maharastra
Work from Office
Associate Director, Data Engineering (J2EE/Angular/React Full Stack Individual Contributor)
Responsibilities:
- Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform.
- Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving.
- Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
- Manage and improve existing software solutions, ensuring high performance and scalability.
- Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
- Produce comprehensive technical design documents and conduct technical walkthroughs.
Experience & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent is required.
- Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development.
- 12+ years of total experience, with 8+ years designing enterprise products, modern data stacks, and analytics platforms.
- 6+ years of hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design patterns and full-stack knowledge, including modern distributed front-end and back-end technology stacks.
- 5+ years of full-stack development experience in modern web development technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB.
- Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus.
- Experience designing transactional/data warehouse/data lake and data integrations with the big data ecosystem, leveraging AWS cloud technologies.
- Thorough understanding of distributed computing.
- Passionate, smart, and articulate developer.
- Quality-first mindset with a strong background in developing products for a global audience at scale.
- Excellent analytical thinking, interpersonal, oral, and written communication skills, with a strong ability to influence both IT and business partners.
- Superior knowledge of system architecture, object-oriented design, and design patterns.
- Good work ethic; self-starter and results-oriented.
Additional Preferred Qualifications:
- Experience working with AWS.
- Experience with the SAFe Agile Framework.
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
- Ability to prioritize and manage work to critical project timelines in a fast-paced environment.
- Ability to train and mentor.
Posted 3 weeks ago
10.0 - 15.0 years
25 - 40 Lacs
Mumbai
Work from Office
Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.
Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!
About the Role:
Title: Lead Data Engineer
Location: Mumbai
Responsibilities:
- End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
- Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the evolution of the team's data pipeline framework.
- Data Architecture & Solutions: Contribute to data architecture design, applying expertise in data modelling, storage, and retrieval.
- Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.
Qualification Details:
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies such as Kafka, Spark Streaming, and Flink (see the streaming sketch below).
- Database Expertise: Excellent querying skills (SQL) and a strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
- Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
- CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.
Desired Skills & Attributes:
- Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
- Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
- Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
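As a hedged sketch of the streaming side of this stack (Kafka into Spark Structured Streaming, both named above): the broker and topic are placeholders, and the job needs the spark-sql-kafka connector package on the classpath.

```python
# Kafka -> Spark Structured Streaming -> console sketch.
# Requires the spark-sql-kafka connector; broker/topic are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = events.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```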
Posted 3 weeks ago