0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Location: HYDERABAD OFFICE INDIA

Job Description

Key Responsibilities:
Productionize pipelines for large, complex data sets that meet technical and business requirements.
Partner with data asset managers, architects, and development leads to ensure a sound technical solution.
Follow, and contribute to, coding standards and best practices to ensure pipelines and components are efficient, robust, cost-effective, and reusable.
Identify, design, and implement internal process improvements.
Optimize Spark jobs for performance and cost: tune configurations, minimize shuffles, and leverage advanced techniques such as broadcast joins, caching, and partitioning (see the sketch after this posting).
Ensure data quality, reliability, and performance by implementing best practices for data validation, monitoring, and optimization.
Monitor and troubleshoot data pipelines and workflows to ensure seamless operation.
Stay updated on the latest Databricks features, tools, and industry trends to continuously improve data engineering practices.
Strong understanding of distributed computing concepts and big data processing.
Excellent problem-solving skills and the ability to work collaboratively in a team environment.

Job Qualifications
Strong skills in Python, SQL, Delta Lake, Databricks, Spark/PySpark, GitHub, and Azure.
You will be expected to attain and/or maintain technical certifications related to the role (Databricks, Azure).
Ability to use and implement CI/CD and associated tools such as GitHub Actions, SonarQube, and Snyk.
Familiarity or experience with one or more modern application development framework methods and tools (e.g., Disciplined Agile, Scrum).
Familiarity or experience with a range of data engineering best practices for development, including query optimization, version control, code reviews, and documentation.
The ability to build relationships and work in diverse, multidisciplinary teams.
Excellent communication skills with business intuition, the ability to understand business systems, versatility, and a willingness to learn new technologies on the job.

About Us
We produce globally recognized brands and we grow the best business leaders in the industry. With a portfolio of trusted brands as diverse as ours, it is paramount that our leaders are able to lead with courage across a vast array of brands, categories, and functions. We serve consumers around the world with one of the strongest portfolios of trusted, quality, leadership brands, including Always®, Ariel®, Gillette®, Head & Shoulders®, Herbal Essences®, Oral-B®, Pampers®, Pantene®, Tampax® and more. Our community includes operations in approximately 70 countries worldwide. Visit http://www.pg.com to know more.

We are an equal opportunity employer and value diversity at our company. We do not discriminate against individuals on the basis of race, color, gender, age, national origin, religion, sexual orientation, gender identity or expression, marital status, citizenship, disability, HIV/AIDS status, or any other legally protected factor.

"At P&G, the hiring journey is personalized every step of the way, thereby ensuring equal opportunities for all, with a strong foundation of Ethics & Corporate Responsibility guiding everything we do. All the available job opportunities are posted either on our website - pgcareers.com, or on our official social media pages, for the convenience of prospective candidates, and do not require them to pay any kind of fees towards their application."

Job Schedule: Full time
Job Number: R000135017
Job Segmentation: Experienced Professionals
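The Spark tuning techniques named in the responsibilities above (broadcast joins, caching, and partitioning) can be illustrated in a few lines of PySpark. This is a minimal sketch, not P&G's pipeline code; the Delta paths, table shapes, and column names are invented for illustration.

```python
# Minimal PySpark sketch of broadcast joins, caching, and partitioning.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("tuning-sketch")
    # Fewer shuffle partitions for modest data volumes reduces task overhead.
    .config("spark.sql.shuffle.partitions", 200)
    .getOrCreate()
)

orders = spark.read.format("delta").load("/mnt/lake/orders")      # large fact table
products = spark.read.format("delta").load("/mnt/lake/products")  # small dimension

# Broadcast join: ship the small table to every executor instead of
# shuffling the large one across the cluster.
enriched = orders.join(F.broadcast(products), "product_id")

# Cache a DataFrame that more than one downstream action will reuse,
# so it is computed once rather than per action.
enriched.cache()

daily = enriched.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
daily.show()  # first action over the cached DataFrame

# Partition the written output by a pruning-friendly column so later reads
# can skip irrelevant files; this second action also reuses the cache.
enriched.write.format("delta").mode("overwrite") \
    .partitionBy("order_date").save("/mnt/lake/orders_enriched")
```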
Posted 6 days ago
9.0 years
20 - 35 Lacs
Pune, Maharashtra, India
Remote
Job Title: Senior Analyst – Data Analytics
Location: Remote
Experience: 9+ Years
Employment Type: Full-Time
Note: This is a non-engineering role; data engineers should not apply.

Job Summary
We are looking for a detail-oriented and proactive Senior Analyst – Data Analytics to join our team in Mumbai. This non-engineering role focuses on leveraging Snowflake, Databricks, and Power BI to deliver actionable insights, create dashboards, and support data-driven decision-making across the business.

Key Responsibilities
Analyze and interpret data using Snowflake and Databricks to provide strategic insights
Design and develop impactful dashboards and visualizations using Power BI
Collaborate with stakeholders to understand business requirements and translate them into analytical solutions
Identify trends, patterns, and opportunities for business improvements
Ensure accuracy, integrity, and consistency of data used for reporting and analytics
Deliver clear and concise reports to business leaders and teams

Required Skills
9+ years of hands-on experience in data analytics/business intelligence
Proficient in Snowflake and Databricks (using SQL and/or Python)
Strong expertise in Power BI – report building, DAX functions, and dashboard design
Solid understanding of data modeling, KPIs, and data storytelling
Strong SQL skills
Excellent communication and analytical thinking skills
Ability to manage multiple tasks and work cross-functionally

Skills: data, analytical thinking, data storytelling, data modeling, data analytics, communication, business intelligence, databricks, power bi, snowflake, kpi development, python, dax functions, sql
Posted 6 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Business Analyst – Insurance domain
Experience: 10-15 years
Location: Hyderabad
Required Skills: Business Analyst – BRD/FRD, Stakeholder Management, UAT Testing, Data Warehouse Concepts, SQL joins and subqueries (see the sketch after this posting), Data Visualization tools (Power BI/MSTR), and Insurance domain (Life Insurance and Annuities)
Please share your resume with jyothsna.g@technogenindia.com.

Experience:
10+ years of experience as a BSA or similar role in data analytics or technology projects.
5+ years of domain experience in asset management, investment management, insurance, or financial services.
Familiarity with Investment Operations concepts such as Critical Data Elements (CDEs), data traps, and reconciliation workflows.
Working knowledge of data engineering principles: ETL/ELT, data lakes, and data warehousing.
Proficiency in BI and analytics tools such as Power BI, Tableau, MicroStrategy, and SQL.
Excellent communication, analytical thinking, and stakeholder engagement skills.
Experience working in Agile/Scrum environments with cross-functional delivery teams.

Technical Skills:
Proven track record of analytical and problem-solving skills.
In-depth knowledge of investment data platforms, including Golden Source, NeoXam, RIMES, JPM Fusion, etc.
Expertise in cloud data technologies such as Snowflake, Databricks, and AWS/GCP/Azure data services.
Strong understanding of data governance frameworks, metadata management, and data lineage.
Familiarity with regulatory requirements and compliance standards in the investment management industry.
Hands-on experience with IBORs such as BlackRock Aladdin, CRD, Eagle STAR (ABOR), Eagle Pace, and Eagle DataMart.
Familiarity with investment data platforms such as Golden Source, FINBOURNE, NeoXam, RIMES, and JPM Fusion.
Experience with cloud data platforms like Snowflake and Databricks.
Background in data governance, metadata management, and data lineage frameworks.
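Since the required skills call out SQL joins and subqueries, here is a minimal, self-contained illustration using Python's built-in sqlite3 module. The insurance-flavored schema and data are invented for this example, not taken from the posting.

```python
# Self-contained sketch of a join plus a subquery with Python's built-in
# sqlite3. The schema and rows are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE policies (policy_id INTEGER PRIMARY KEY, holder TEXT, product TEXT);
    CREATE TABLE claims (claim_id INTEGER PRIMARY KEY, policy_id INTEGER, amount REAL);
    INSERT INTO policies VALUES (1, 'Asha', 'Life'), (2, 'Ravi', 'Annuity');
    INSERT INTO claims VALUES (10, 1, 5000.0), (11, 1, 1200.0), (12, 2, 800.0);
""")

# Join: attach holder names to claims.
# Subquery: keep only claims above the overall average claim amount.
rows = conn.execute("""
    SELECT p.holder, c.amount
    FROM claims c
    JOIN policies p ON p.policy_id = c.policy_id
    WHERE c.amount > (SELECT AVG(amount) FROM claims)
""").fetchall()

print(rows)  # [('Asha', 5000.0)] -- only the claim above the ~2333 average
```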
Posted 6 days ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
7+ years total experience in Data Engineering projects and 4+ years of relevant experience with Azure technology services and Python.
Azure: Azure Data Factory, ADLS (Azure Data Lake Store), Azure Databricks.
Mandatory programming languages: PySpark, PL/SQL, Spark SQL.
Database: SQL DB.
Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates.
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark SQL, etc.
Data warehousing experience with strong domain knowledge.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
Intuitive individual with an ability to manage change and proven time management.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Up-to-date technical knowledge by attending educational workshops and reviewing publications.

Preferred Technical And Professional Experience
Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates.
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark SQL, etc.
Posted 6 days ago
3.0 years
8 - 14 Lacs
Mumbai Metropolitan Region
Remote
Job Title: Senior Analyst – Data Analytics
Location: Mumbai/Pune
Experience: 3+ Years
Employment Type: Full-Time
IMPORTANT: Remote, but you must travel to the client location once a month.
Note: This is a non-engineering role; data engineers should not apply.

Job Summary
We are looking for a detail-oriented and proactive Senior Analyst – Data Analytics to join our team in Mumbai. This non-engineering role focuses on leveraging Snowflake, Databricks, and Power BI to deliver actionable insights, create dashboards, and support data-driven decision-making across the business.

Key Responsibilities
Analyze and interpret data using Snowflake and Databricks to provide strategic insights
Design and develop impactful dashboards and visualizations using Power BI
Collaborate with stakeholders to understand business requirements and translate them into analytical solutions
Identify trends, patterns, and opportunities for business improvements
Ensure accuracy, integrity, and consistency of data used for reporting and analytics
Deliver clear and concise reports to business leaders and teams

Required Skills
3+ years of hands-on experience in data analytics/business intelligence
Proficient in Snowflake and Databricks (using SQL and/or Python)
Strong expertise in Power BI – report building, DAX functions, and dashboard design
Solid understanding of data modeling, KPIs, and data storytelling
Strong SQL skills
Excellent communication and analytical thinking skills
Ability to manage multiple tasks and work cross-functionally

Skills: data modeling, analytical thinking, databricks, data analytics, sql, dax functions, kpis, power bi, python, communication, data storytelling, snowflake
Posted 6 days ago
3.0 years
8 - 14 Lacs
Pune, Maharashtra, India
Remote
Job Title: Senior Analyst – Data Analytics
Location: Mumbai/Pune
Experience: 3+ Years
Employment Type: Full-Time
IMPORTANT: Remote, but you must travel to the client location once a month.
Note: This is a non-engineering role; data engineers should not apply.

Job Summary
We are looking for a detail-oriented and proactive Senior Analyst – Data Analytics to join our team in Mumbai. This non-engineering role focuses on leveraging Snowflake, Databricks, and Power BI to deliver actionable insights, create dashboards, and support data-driven decision-making across the business.

Key Responsibilities
Analyze and interpret data using Snowflake and Databricks to provide strategic insights
Design and develop impactful dashboards and visualizations using Power BI
Collaborate with stakeholders to understand business requirements and translate them into analytical solutions
Identify trends, patterns, and opportunities for business improvements
Ensure accuracy, integrity, and consistency of data used for reporting and analytics
Deliver clear and concise reports to business leaders and teams

Required Skills
3+ years of hands-on experience in data analytics/business intelligence
Proficient in Snowflake and Databricks (using SQL and/or Python)
Strong expertise in Power BI – report building, DAX functions, and dashboard design
Solid understanding of data modeling, KPIs, and data storytelling
Strong SQL skills
Excellent communication and analytical thinking skills
Ability to manage multiple tasks and work cross-functionally

Skills: data modeling, analytical thinking, databricks, data analytics, sql, dax functions, kpis, power bi, python, communication, data storytelling, snowflake
Posted 6 days ago
3.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Company Description
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population.

Job Description
Write complex algorithms to find optimal solutions to real-time problems
Perform qualitative analysis and data mining to extract data, discover hidden patterns, and develop predictive models based on findings
Develop processes to extract, transform, and load data (an orchestration sketch follows this posting)
Use distributed computing to validate and process large volumes of data to deliver insights
Evaluate technologies we can leverage, including open-source frameworks, libraries, and tools
Interface with product and other engineering teams on a regular cadence

Qualifications
3+ years of applicable data engineering experience, including Python & RESTful APIs
In-depth understanding of the Python software development stacks, ecosystems, frameworks, and tools, such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch
Strong fundamentals in data mining and data processing methodologies
Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability
Sound understanding of Big Data and RDBMS technologies, such as SQL, Hive, Spark, Databricks, Snowflake, or PostgreSQL
Orchestration and messaging frameworks: Airflow
Good experience working with the Azure cloud platform
Good experience working with containerization frameworks; Docker is a plus
Experience in agile software development practices and DevOps is a plus
Knowledge of and experience with Kubernetes is a plus
Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business
Minimum B.E. degree in Computer Science, Computer Engineering, or a related field

Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms
Recharge and revitalize with the help of wellness plans made for you and your family
Plan your future with financial wellness tools
Stay relevant and upskill yourself with career development opportunities

Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee-Assistance-Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
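The ETL and orchestration items above (develop processes to extract, transform, and load data; Airflow as the orchestration framework) can be sketched as a minimal Airflow DAG. This assumes a recent Airflow 2.x installation; the DAG id, schedule, and task bodies are hypothetical placeholders, not the company's actual pipeline.

```python
# Minimal Airflow 2.x sketch of an extract-transform-load flow.
# DAG id, schedule, and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean, validate, and reshape the extracted data")

def load():
    print("write the transformed data to the warehouse")

with DAG(
    dag_id="etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # run once per day; no backfill of missed runs
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Task ordering: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```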
Posted 6 days ago
3.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Job Description
Join a dynamic and diverse global team dedicated to developing innovative solutions that uncover the complete consumer journey for our clients. We are seeking a highly skilled Data Scientist with strong development skills in programming languages such as Python, along with expertise in statistics, mathematics, and econometrics, and experience with panel data, to revolutionize the way we measure consumer behavior both online and in-store. Looking ahead, we are excited to find someone who will join our team in developing a tool that can simulate the impact of production process changes on client data. This tool, outside of the production factory, will allow the wider Data Science team to drive innovation with unprecedented efficiency.

About The Role
Collaborative Environment: Work with an international team in a flexible and supportive setting, fostering cross-functional collaboration between data scientists, engineers, and product stakeholders
Tool Ownership and Development: Take ownership of a core Python-based tool, ensuring its continued development, scalability, and maintainability. Use robust engineering practices such as version control, testing, and PRs
Innovative Solution Development: Collaborate closely with subject matter experts to understand complex methodologies. Translate these into scalable, production-ready implementations within the Python tool. Design and implement new features and enhancements to the tool to address evolving market challenges and improve team efficiency
Methodology Enhancement: Evaluate and improve current methodologies, including data cleaning, preparation, quality tracking, and consumer projection, with a strong focus on automation and reproducibility
Documentation & Code Quality: Maintain comprehensive documentation of the tool’s architecture, usage, and development roadmap. Ensure high code quality through peer reviews and adherence to best practices
Research and Analysis: Conduct rigorous research and analysis to inform tool improvements and ensure alignment with business needs. Communicate findings and recommendations clearly to both technical and non-technical audiences
Deployment and Support: Support the production deployment of new features and enhancements. Monitor tool performance and address issues proactively to ensure reliability and user satisfaction
Cross-Team Coordination: Coordinate efforts across multiple teams and stakeholders to ensure seamless integration of the tool into broader workflows and systems

Qualifications
About You
Ideally you possess a good understanding of consumer behavior, panel-based projections, and consumer metrics and analytics. You have successfully designed and developed software applying statistical and data-analytical methods and demonstrated your ability to handle complex data sets. Experience with (un)managed crowdsourced panels and receipt capture methodologies is an advantage.

Educational Background: Bachelor’s or Master’s Degree in Computer Science, Software Engineering, Mathematics, Statistics, Socioeconomics, Data Science, or a related field, with a minimum of 3 years of relevant experience
Programming Proficiency: Proficient in Python or another programming language (R, C++, or Java), with a willingness to learn Python
Software Engineering Skills: Strong software engineering skills, including experience designing and developing software; optionally, experience with version control systems such as GitHub or Bitbucket
Data Analysis Skills: Proficiency in manipulating, analyzing, and interpreting large data sets
Data Handling: Experience using Spark, specifically the PySpark package, and experience working with large-scale datasets. Optionally, experience with SQL and writing queries
Continuous Learning: Eagerness to adopt and develop evolving technologies and tools
Statistical Expertise: Statistical and logical skills, and experience with data cleaning and data aggregation techniques
Communication and Collaboration: Strong communication, writing, and collaboration skills

Nice to Have
Consumer Insights: Knowledge of consumer behavior and (un)managed consumer-related crowdsourced panels
Technology Skills: Familiarity with technology stacks for cloud computing (Azure AI, Databricks, Snowflake)
Production Support: Experience or interest in supporting technology teams in production deployment

Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee-Assistance-Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 6 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
ISP Data Science - Analyst Role Profile

Location: Bangalore, India

Purpose of Role
We are seeking a highly skilled and data-driven Data Science - Analyst to join our team. The ideal candidate will leverage advanced data analytics and AI techniques, along with business heuristics, to analyse student enrolment and retention data, identify trends, and provide actionable insights to support ISP and its schools’ enrolment goals. This role is critical for improving student experiences, optimising resource allocation, and enhancing overall enrolment and retention performance. The successful candidate will bring strong expertise in Python-based (or equivalent) statistical modelling, including propensity modelling, experience with Azure Databricks for scalable data workflows, and advanced skills in Power BI to build high-impact visualisations and dashboards. The role requires both technical depth and the ability to translate complex insights into strategic recommendations.

ISP Principles
Begin with our children and students. Our children and students are at the heart of what we do. Simply, their success is our success. Wellbeing and safety are both essential for learners and learning. Therefore, we are consistent in identifying potential safeguarding and Health & Safety issues and acting and following up on all concerns appropriately.
Treat everyone with care and respect. We look after one another, embrace similarities and differences and promote the well-being of self and others.
Operate effectively. We focus relentlessly on the things that are most important and will make the most difference. We apply school policies and procedures and embody the shared ideas of our community.
Are financially responsible. We make financial choices carefully based on the needs of the children, students and our schools.
Learn continuously. Getting better is what drives us. We positively engage with personal and professional development and school improvement.

ISP Data Science - Analyst Key Responsibilities
Data Analysis: Collect, clean, and preprocess enrolment, retention, and customer satisfaction data from multiple sources. Analyse data to uncover trends, patterns, and factors influencing enrolment, retention, and customer satisfaction.
AI and Machine Learning Implementation: Develop and deploy propensity models to support customer acquisition and retention activities and strategy (see the sketch after this posting). Use Azure, Databricks, and other equivalent platforms for scalable data engineering and machine learning workflows. Develop and implement AI models, such as predictive analytics and propensity models, to forecast enrolment patterns and retention risks. Use machine learning algorithms to identify high-risk student populations and recommend intervention strategies. Support lead scoring model development on HubSpot CRM. Collaborate with key colleagues to understand and define the most impactful use cases for AI and Machine Learning. Analyse the cost/benefit of deploying systems and provide recommendations.
Reporting and Visualisation: Create relevant dashboards on MS Power BI, reports, and visualisations to communicate key insights to stakeholders. Present findings in a clear and actionable manner to support decision-making.
Collaboration: Work closely with key Group and Regional colleagues to understand challenges and opportunities related to enrolment and retention. Partner with IT and data teams to ensure data integrity and accessibility.
Continuous Improvement: Monitor the performance of AI models and analytics tools, making necessary adjustments to improve accuracy and relevance. Stay updated with the latest advancements in AI, data analytics, and education trends.

Skills, Qualifications And Experience
Education: Bachelor’s degree in Data Science, Computer Science, Statistics, or a related field (Master’s preferred).
Experience:
At least 2 years’ experience in data analytics, preferably in education or a related field
Experience implementing predictive models (propensity models) and interpreting their results
Strong Python skills for statistical modelling, including logistic regression, clustering, and decision trees
Hands-on experience with Azure Databricks is highly preferred
Strong working knowledge of Power BI for building automated and interactive dashboards
Hands-on experience with AI/ML tools and frameworks, and currently employed in an AI/ML role
Proficiency in SQL, Python, R, or other data analytics languages
Skills and preferred attributes:
Strong understanding of statistical methods and predictive analytics
Proficiency in data visualization tools (e.g., Tableau, Power BI, or similar)
Excellent problem-solving, critical thinking, and communication skills
Ability to work collaboratively with diverse teams
Experience in education technology or student success initiatives
Familiarity with CRM or student information systems
Knowledge of ethical considerations in AI and data privacy laws

ISP Commitment to Safeguarding Principles
ISP is committed to safeguarding and promoting the welfare of children and young people and expects all staff and volunteers to share this commitment. All post holders are subject to appropriate vetting procedures, including an online due diligence search, references and satisfactory Criminal Background Checks or equivalent covering the previous 10 years’ employment history.

ISP Commitment to Diversity, Equity, Inclusion, and Belonging
ISP is committed to strengthening our inclusive culture by identifying, hiring, developing, and retaining high-performing teammates regardless of gender, ethnicity, sexual orientation and gender expression, age, disability status, neurodivergence, socio-economic background or other demographic characteristics. Candidates who share our vision and principles and are interested in contributing to the success of ISP through this role are strongly encouraged to apply.
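A propensity model of the kind this role describes (logistic regression scored as a probability of retention) can be sketched with scikit-learn. The feature names and synthetic data below are invented for illustration; a real model would be trained on actual enrolment and retention records.

```python
# Minimal propensity-model sketch: logistic regression scored as a
# probability of re-enrolment. Features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: attendance rate, fee-payment delay (days), tenure (years).
X = np.column_stack([
    rng.uniform(0.5, 1.0, n),
    rng.integers(0, 60, n),
    rng.integers(1, 10, n),
])

# Synthetic target: higher attendance and longer tenure make re-enrolment likelier.
logits = 4 * X[:, 0] - 0.05 * X[:, 1] + 0.2 * X[:, 2] - 3
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Propensity scores: per-student retention probabilities, usable for
# targeting interventions at the lowest-scoring segment.
scores = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, scores), 3))
```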
Posted 6 days ago
3.0 years
0 Lacs
India
Remote
About us:
At Xenon7, we work with leading enterprises and innovative startups on exciting, cutting-edge projects that leverage the latest technologies across various domains of IT including Data, Web, Infrastructure, AI, and many others. Our expertise in IT solutions development and on-demand resources allows us to partner with clients on transformative initiatives, driving innovation and business growth. Whether it's empowering global organizations or collaborating with trailblazing startups, we are committed to delivering advanced, impactful solutions that meet today's most complex challenges.

About the Client:
Join one of the Fortune 500 leaders in the pharmaceutical industry, looking to innovate and expand its technological capabilities. You would join a product team working on their self-service, company-wide platform that enables all teams and business units to deploy their AI solutions, making them accessible across the entire company.

Position Summary:
As an MLOps Engineer, you will play a crucial role in the product team. You will focus on administering and optimizing Databricks within AWS environments, expanding features and capabilities on Databricks, assessing new releases of Databricks features, implementing them on the platform, and generally supporting the business teams with their requests for the platform.

Key Responsibilities:
Databricks Administration: Manage and optimize Databricks environments, ensuring high availability, performance, and security
DevOps Engineering: Implement and maintain Databricks on serverless architectures, ensuring seamless CI/CD pipelines and robust integration with AWS services
MLOps Implementation: Develop and enforce best practices for machine learning lifecycle management using Databricks. Collaborate with data scientists and developers to automate and streamline our AI model development
AWS and Azure Integration: Leverage a broad range of AWS services and maintain familiarity with Azure to ensure cross-compatibility and optimal performance of our platforms
Namespace Administration in EKS: Manage Kubernetes namespace-level operations within AWS EKS, including application deployment and environment configuration

Requirements
3+ years in a similar role with proven expertise in Databricks, AWS, and preferably some exposure to Azure
Strong background in MLOps, DevOps, and cloud (experience in a similar industry is desirable)
Knowledge of AWS AI Services
Posted 6 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Application Architect
Project Role Description: Provide functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 years of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Architect, you will provide functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. You will also assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the design and implementation of application solutions.
- Ensure compliance with architectural standards and guidelines.
- Identify opportunities for process improvement and innovation.
- Mentor junior team members to enhance their skills.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data analytics platforms.
- Experience in designing and implementing scalable data solutions.
- Proficient in data modeling and database design.
- Hands-on experience with data integration and ETL processes.

Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
Posted 6 days ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must-have skills: Microsoft Azure Analytics Services
Good-to-have skills: NA
Minimum 12 years of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly and efficiently throughout the organization, while also addressing any challenges that arise in the data management process. Your role will be pivotal in shaping the data landscape of the organization, enabling informed decision-making and strategic planning.

Roles & Responsibilities:
A. Function as the Lead Data Architect for a small, simple project/proposal, or as a team lead for a medium/large-sized project or proposal
B. Discuss specific Big Data architecture and related issues with the client architect/team (in area of expertise)
C. Analyze and assess the impact of the requirements on the data and its lifecycle
D. Lead Big Data architecture and design of medium-to-large cloud-based, Big Data and analytical solutions using Lambda architecture
E. Breadth of experience in various client scenarios and situations
F. Experienced in Big Data architecture-based sales and delivery
G. Thought leadership and innovation
H. Lead creation of new data assets and offerings
I. Experience in handling OLTP and OLAP data workloads

Professional & Technical Skills:
A. Strong experience in Azure is preferred, with hands-on experience in two or more of these skills: Azure Synapse Analytics, Azure HDInsight, Azure Databricks with PySpark/Scala/SparkSQL, Azure Analysis Services
B. Experience in one or more real-time/streaming technologies, including Azure Stream Analytics, Azure Data Explorer, Azure Time Series Insights, etc.
C. Experience in handling medium to large Big Data implementations
D. Candidate must have around 5 years of extensive Big Data experience
E. Candidate must have 15 years of IT experience and around 5 years of extensive Big Data experience (design + build)

Additional Information:
A. Should be able to drive technology design meetings and propose technology design and architecture
B. Should have excellent client communication skills
C. Should have good analytical and problem-solving skills
Posted 6 days ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must-have skills: Microsoft Azure Analytics Services
Good-to-have skills: NA
Minimum 15 years of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly and efficiently throughout the organization, while also addressing any challenges that arise in the data management process. Your role will be pivotal in shaping the data landscape of the organization, enabling informed decision-making and strategic planning.

Roles & Responsibilities:
A. Function as the Lead Data Architect for a small, simple project/proposal, or as a team lead for a medium/large-sized project or proposal
B. Discuss specific Big Data architecture and related issues with the client architect/team (in area of expertise)
C. Analyze and assess the impact of the requirements on the data and its lifecycle
D. Lead Big Data architecture and design of medium-to-large cloud-based, Big Data and analytical solutions using Lambda architecture
E. Breadth of experience in various client scenarios and situations
F. Experienced in Big Data architecture-based sales and delivery
G. Thought leadership and innovation
H. Lead creation of new data assets and offerings
I. Experience in handling OLTP and OLAP data workloads

Professional & Technical Skills:
A. Strong experience in Azure is preferred, with hands-on experience in two or more of these skills: Azure Synapse Analytics, Azure HDInsight, Azure Databricks with PySpark/Scala/SparkSQL, Azure Analysis Services
B. Experience in one or more real-time/streaming technologies, including Azure Stream Analytics, Azure Data Explorer, Azure Time Series Insights, etc.
C. Experience in handling medium to large Big Data implementations
D. Candidate must have around 5 years of extensive Big Data experience
E. Candidate must have 15 years of IT experience and around 5 years of extensive Big Data experience (design + build)

Additional Information:
A. Should be able to drive technology design meetings and propose technology design and architecture
B. Should have excellent client communication skills
C. Should have good analytical and problem-solving skills
Posted 6 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Hands-on experience working with SAS-to-Python conversions.
Strong mathematics and statistics skills.
Skilled in AI-specific utilities like ChatGPT, Hugging Face Transformers, etc.
Ability to understand business requirements.
Use-case derivation and solution creation from structured/unstructured data.
Storytelling, business communication, and documentation.
Programming skills: SAS, Python, scikit-learn, TensorFlow, PyTorch, Keras.
Exploratory data analysis.
Machine learning and deep learning algorithms.
Model building, hyperparameter tuning, and model performance metrics (see the sketch after this posting).
MLOps, data pipelines, data engineering.
Statistics knowledge (probability distributions, hypothesis testing).
Time series modeling, forecasting, image/video analytics, and natural language processing (NLP).
ML services from clouds such as AWS, GCP, Azure, and Databricks.
Optional: Databricks, Big Data; basic knowledge of Spark and Hive.

Roles & Responsibilities
Responsible for SAS-to-Python code conversion.
Acquire the skills required to build machine learning models and deploy them to production.
Feature engineering, EDA, pipeline creation, model training, and hyperparameter tuning with structured and unstructured data sets.
Develop and deploy cloud-based applications, including LLM/GenAI, into production.

We are hiring for all locations: Indore / Noida / Gurgaon / Bangalore / Pune.
Kindly share your profile with notice period and compensation details.
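Model building, hyperparameter tuning, and performance metrics, as listed above, can be sketched with scikit-learn's GridSearchCV. The estimator, parameter grid, and synthetic data below are arbitrary choices for illustration, not a prescribed approach.

```python
# Minimal sketch of model building with hyperparameter tuning and a
# performance metric, using GridSearchCV on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Grid search: try every combination in param_grid with 3-fold
# cross-validation, scoring each candidate by F1.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    scoring="f1",
    cv=3,
)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("test F1:", round(f1_score(y_te, search.best_estimator_.predict(X_te)), 3))
```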
Posted 6 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Cling Multi Solutions empowers businesses through expert consultancy, innovative DevOps solutions, and specialized law and governance services. We are committed to quality and client satisfaction, leveraging a network of elite software developers and consultants. Our tailored solutions span Software Development, Information Security, Data Analytics, and more. Cling guides clients through complex digital and regulatory landscapes to achieve sustained success and growth.

Role Description
We are seeking a highly skilled and analytically strong Scrum Master + Site Reliability Engineer (SRE) with 6+ years of experience to join our team. The ideal candidate will have a proven track record in managing SRE responsibilities across multiple teams, with deep expertise in Active Directory (AD) groups, Databricks, architecture design, and enterprise tools like Clarity and ServiceNow. Strong Scrum delivery experience and cross-functional collaboration are essential.

Qualifications
Certified Scrum Master or equivalent Agile certification.
Experience working in a global delivery model.
Exposure to digital product and reporting services is a plus.
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
India
Remote
Work Timings: Rotational Shifts (IST)
Work Location: Remote

Job Description Summary
The Data Engineer is responsible for managing and operating Tableau, Tableau Bridge server, Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI. The engineer will work closely with the customer and team to manage and operate the cloud data platform.

Job Description
Leads operational coverage: resolving pipeline issues, proactive monitoring for sensitive batches, RCA and retrospection of issues, and documenting defects
Design, build, test, and deploy fixes to non-production environments for customer testing. Work with the customer to deploy fixes on production upon receiving customer acceptance of the fix.
Cost/performance optimization and audit/security, including any associated infrastructure changes
Knowledge Management: Create/update knowledge bases and runbooks as needed
Governance: Watch all configuration changes to batches and infrastructure (cloud platform), mapping them to proper documentation and aligning resources
Communication: Lead and act as a POC for the customer from on-site, handling communication, escalation, isolating issues, and coordinating with off-site resources while level-setting expectations across stakeholders
Change Management: Align resources for on-demand changes and coordinate with stakeholders as required
Request Management: Handle user requests; if the request is not runbook-based, create a new KB article or update the runbook accordingly
Incident Management and Problem Management: Root cause analysis, and coming up with preventive measures and recommendations, such as enhancing monitoring or systemic changes as needed

KNOWLEDGE/SKILLS/ABILITY:
Good hands-on experience with Tableau, Tableau Bridge server, Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI
Ability to read and write SQL and stored procedures
Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills
Excellent written and verbal communication skills; ability to communicate technical information and ideas so others will understand
Ability to successfully work and promote inclusiveness in small groups

JOB COMPLEXITY:
This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of various sources, such as Microsoft documentation, may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task.

SUPERVISION:
Works under moderate supervision

EXPERIENCE/EDUCATION:
Requires a Bachelor’s degree in computer science or another related field, plus 8-12 years of hands-on experience in configuring and managing Azure data analytics solutions. Experience with the Azure environment is desired.

PHYSICAL DEMANDS:
General office environment. No special physical demands required. Schedule flexibility to include working a weekend day regularly and holidays as required by the business for 24/7 operations. Occasional travel, less than 10%.

POLICY COMPLIANCE:
Responsible for adhering to company security policies and procedures and any other relevant policies and standards
Posted 6 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About the Team
The Analytics Engineering team at DoorDash is embedded within the Analytics and Data Engineering orgs, and is responsible for building internal data products that scale decision-making across business teams and drive efficiency in our operations. Data is fundamental to DoorDash's success, and this team plays a critical role in enabling high-impact, data-driven solutions across Product, Operations, Finance, and more.

About the Role
As an Analytics Engineer, you'll play a key role in building and scaling the data foundations that enable fast, reliable, and actionable insights. You'll work closely with partner teams to drive end-to-end analytics initiatives, working alongside Data Engineers, Data Scientists, Software Engineers, Product Managers, and Operators. This is a highly technical role where you'll be a driving force behind the analytics stack, delivering trusted data and metrics that support decision-making at all levels of the company. If you're energized by solving technical problems with data and comfortable being deeply embedded across several domains, this role is for you!

You're excited about this opportunity because you will…
Collaborate with data scientists, data engineers, and business stakeholders to understand business needs, and translate that scope into data requirements
Identify key business questions and problems to solve for, and generate insights by developing structured solutions to resolve them
Lead the development of data products and self-serve tools that enable analytics to scale across the company
Build and maintain canonical datasets by developing high-volume, reliable ETL/ELT pipelines using data lake and data warehousing concepts
Design metrics and data visualizations with dashboarding tools like Tableau, Sigma, and Mode
Be a cross-functional champion at upholding high data integrity standards to increase reusability, readability, and standardization

We're excited about you because…
5+ years of experience working in business intelligence, analytics engineering, data engineering, or a similar role
Strong proficiency in SQL for data transformation, comfort in at least one functional/OOP language such as Python or Scala
Expertise in creating compelling reporting and data visualization solutions using dashboarding tools (e.g., Looker, Tableau, Sigma)
Familiarity with database fundamentals (e.g., S3, Trino, Hive, Spark), and experience with SQL performance tuning
Experience in writing data quality checks to validate data integrity (e.g., Pydeequ, Great Expectations); a sketch of this idea follows the posting
Excellent communication skills and experience working with technical and non-technical teams
Comfortable working in a fast-paced environment, self-starter, and self-organizer
Ability to think strategically, analyze, and interpret market and consumer information

Nice to Have
Experience with modern data warehousing platforms (e.g., Snowflake, Databricks, Redshift) and ability to optimize performance
Experience building multi-step ETL jobs coupled with orchestrating workflows (e.g., Airflow, Dagster, dbt)
Familiarity with experimentation concepts like A/B testing and their data requirements

Notice to Applicants for Jobs Located in NYC or Remote Jobs Associated With Office in NYC Only
We use Covey as part of our hiring and/or promotional process for jobs in NYC and certain features may qualify it as an AEDT in NYC. As part of the hiring and/or promotion process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound from August 21, 2023, through December 21, 2023, and resumed using Covey Scout for Inbound again on June 29, 2024. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: Covey

About DoorDash
At DoorDash, our mission to empower local economies shapes how our team members move quickly, learn, and iterate in order to make impactful decisions that display empathy for our range of users—from Dashers to merchant partners to consumers. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers. We're committed to supporting employees' happiness, healthiness, and overall well-being by providing comprehensive benefits and perks.

Our Commitment to Diversity and Inclusion
We're committed to growing and empowering a more inclusive community within our company, industry, and cities. That's why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel. If you need any accommodations, please inform your recruiting contact upon initial connection.

We use Covey as part of our hiring and/or promotional process for jobs in certain locations. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: https://getcovey.com/nyc-local-law-144
To request a reasonable accommodation under applicable law or alternate selection process, please inform your recruiting contact upon initial connection.
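The data quality checks mentioned in the requirements above can be sketched in plain PySpark rather than Pydeequ or Great Expectations, whose APIs vary by version. The dataset path, column names, and rules below are hypothetical; the point is the pattern of asserting completeness, uniqueness, and validity before data is published.

```python
# Sketch of basic data quality checks in plain PySpark.
# Path and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
orders = spark.read.parquet("/data/orders")

total = orders.count()
checks = {
    # Completeness: the key column must never be null.
    "order_id_not_null": orders.filter(F.col("order_id").isNull()).count() == 0,
    # Uniqueness: order_id must be a primary key.
    "order_id_unique": orders.select("order_id").distinct().count() == total,
    # Validity: amounts must be non-negative.
    "amount_non_negative": orders.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Fail the pipeline loudly rather than publishing bad data downstream.
    raise ValueError(f"Data quality checks failed: {failed}")
```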
Posted 6 days ago
0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Senior Data Engineer + AI

Job Summary:
We are looking for a skilled and versatile Data Engineer with expertise in PySpark, Apache Spark, and Databricks, along with experience in analytics, data modeling, and Generative AI/Agentic AI solutions. This role is ideal for someone who thrives at the intersection of data engineering, AI systems, and business insights, contributing to high-impact programs with clients.

Required Skills & Experience:
Advanced proficiency in PySpark, Apache Spark, and Databricks for batch and streaming data pipelines.
Strong experience with SQL for data analysis, transformation, and modeling.
Expertise in data visualization and dashboarding tools (Power BI, Tableau, Looker).
Solid understanding of data warehouse design, relational databases (PostgreSQL, Snowflake, SQL Server), and data lakehouse architectures.
Exposure to Generative AI, RAG, embedding models, and vector databases (e.g., FAISS, Pinecone, ChromaDB); a retrieval sketch follows this posting.
Experience with Agentic AI frameworks: LangChain, Haystack, CrewAI, or similar.
Familiarity with cloud services for data and AI (Azure, AWS, or GCP).
Excellent problem-solving and collaboration skills, with an ability to bridge engineering and business needs.

Preferred Skills:
Experience with MLflow, Delta Live Tables, or other Databricks-native AI tools.
Understanding of prompt engineering, LLM deployment, and multi-agent orchestration.
Knowledge of CI/CD, Git, Docker, and DevOps pipelines.
Awareness of Responsible AI, data privacy regulations, and enterprise data compliance.
Background in consulting, enterprise analytics, or AI/ML product development.

Key Responsibilities:
Design, build, and optimize distributed data pipelines using PySpark, Apache Spark, and Databricks to support both analytics and AI workloads.
Support RAG pipelines, embedding generation, and data pre-processing for LLM applications.
Create and maintain interactive dashboards and BI reports using Power BI, Tableau, or Looker for business stakeholders and consultants.
Conduct ad hoc data analysis to drive data-driven decision-making and enable rapid insight generation.
Develop and maintain robust data warehouse schemas and star/snowflake models, and support data lake architecture.
Integrate with and support LLM agent frameworks such as LangChain, LlamaIndex, Haystack, or CrewAI for intelligent workflow automation.
Ensure data pipeline monitoring, cost optimization, and scalability in cloud environments (Azure/AWS/GCP).
Collaborate with cross-functional teams, including AI scientists, analysts, and business teams, to drive use-case delivery.
Maintain strong data governance, lineage, and metadata management practices using tools like Azure Purview or DataHub.
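The embedding-retrieval step of a RAG pipeline, as referenced in the required skills, can be sketched with FAISS. Random vectors stand in for real document and query embeddings here; a production pipeline would generate them with an embedding model and feed the retrieved chunks into an LLM prompt.

```python
# Minimal sketch of RAG-style retrieval with FAISS. Random vectors stand in
# for document and query embeddings.
import numpy as np
import faiss

dim = 384  # typical sentence-embedding dimensionality (an assumption)
rng = np.random.default_rng(0)

# Pretend these are embeddings of 1,000 document chunks.
doc_vectors = rng.standard_normal((1000, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)  # exact L2 nearest-neighbor index
index.add(doc_vectors)

# Embed the user query (here: another random vector) and retrieve the
# top-5 nearest chunks, whose text would be stuffed into the LLM prompt.
query = rng.standard_normal((1, dim)).astype("float32")
distances, ids = index.search(query, 5)
print("retrieved chunk ids:", ids[0])
```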
Posted 6 days ago
7.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Data Engineer III/IV - IN (Operations/Support)
Work Location: Remote (Work From Home)
Shift: Rotational Shifts (24/7)
Experience: 7-12 years

Job Description Summary
The Data Engineer is responsible for managing and operating Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI/Tableau. The engineer will work closely with the customer and team to manage and operate the cloud data platform.

Job Description
Leads Level 4 operational coverage: resolving pipeline issues, proactive monitoring for sensitive batches, RCA and retrospection of issues, and documenting defects
Design, build, test, and deploy fixes to non-production environments for customer testing. Work with the customer to deploy fixes on production upon receiving customer acceptance of the fix.
Cost/performance optimization and audit/security, including any associated infrastructure changes
Troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example, failed batches, or infra-related items such as connectivity to source and network issues)
Knowledge Management: Create/update runbooks as needed
Governance: Watch all configuration changes to batches and infrastructure (cloud platform), mapping them to proper documentation and aligning resources
Communication: Lead and act as a POC for the customer from off-site, handling communication, escalation, isolating issues, and coordinating with off-site resources while level-setting expectations across stakeholders
Change Management: Align resources for on-demand changes and coordinate with stakeholders as required
Request Management: Handle user requests; if the request is not runbook-based, create a new KB article or update the runbook accordingly
Incident Management and Problem Management: Root cause analysis, and coming up with preventive measures and recommendations, such as enhancing monitoring or systemic changes as needed

Skills
Good hands-on experience with Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI/Tableau
Ability to read and write SQL and stored procedures
Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills
Excellent written and verbal communication skills; ability to communicate technical information and ideas so others will understand
Ability to successfully work and promote inclusiveness in small groups

Experience/Education
Requires a Bachelor’s degree in computer science or another related field, plus 10+ years of hands-on experience in configuring and managing AWS/Tableau and Databricks solutions. Experience with Databricks and Tableau environments is desired.

Job Complexity
This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of various sources, such as Databricks/AWS/Tableau documentation, may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task.
Posted 6 days ago
1.5 years
0 Lacs
Pune, Maharashtra, India
On-site
About Improzo
At Improzo (Improve + Zoe; meaning Life in Greek), we believe in improving life by empowering our customers. Founded by seasoned industry leaders, we are laser-focused on delivering quality-led commercial analytical solutions to our clients. Our dedicated team of experts in commercial data, technology, and operations has been evolving and learning together since our inception. Here, you won't find yourself confined to a cubicle; instead, you'll be navigating open waters, collaborating with brilliant minds to shape the future. You will work with leading Life Sciences clients, seasoned leaders, and carefully chosen peers like you!

People are at the heart of our success, so we have defined our CARE values framework with a lot of effort, and we use it as our guiding light in everything we do. We CARE!
Customer-Centric: Client success is our success. Prioritize customer needs and outcomes in every action.
Adaptive: Agile and innovative, with a growth mindset. Pursue bold and disruptive avenues that push the boundaries of possibilities.
Respect: Deep respect for our clients & colleagues. Foster a culture of collaboration and act with honesty, transparency, and ethical responsibility.
Execution: Laser-focused on quality-led execution; we deliver! Strive for the highest quality in our services, solutions, and customer experiences.

About The Role
We are seeking a highly skilled Data and Reporting Developer (Improzo Level - Associate) to join our dynamic team. As a Big Data Developer, you will be responsible for designing, developing, and maintaining large-scale data processing systems using big data technologies. This is an exciting opportunity for a talented individual with a strong technical background and a passion for working with large datasets to deliver high-quality solutions.

Key Responsibilities
Big Data Application Development: design, develop, and maintain scalable data pipelines and big data applications; work with distributed processing frameworks (e.g., Apache Hadoop, Apache Spark) to process and analyze large datasets; write optimized, high-performance code for data ingestion, processing, and analysis in real-time or batch environments.
Data Architecture: collaborate with data architects and other stakeholders to design and implement data storage solutions using HDFS, NoSQL databases (e.g., Cassandra, HBase, MongoDB), and cloud data platforms (e.g., AWS, Azure, Google Cloud); develop and maintain pipelines for data extraction, transformation, and loading (ETL) using ETL tools or Databricks; work with data lakes and data warehousing solutions for large-scale data storage and processing.
Data Integration: integrate various data sources into the big data ecosystem (e.g., data from relational databases, APIs, third-party tools, IoT devices); ensure seamless data flow between systems while maintaining data quality and integrity.
Reporting Development: design and build reports on tools like Power BI, Tableau, and MicroStrategy; design basic UI/UX as per client needs.
Performance Optimization: optimize big data workflows and queries to ensure high performance and scalability; implement data partitioning, indexing, and other techniques to handle large datasets efficiently (a hedged sketch follows this posting).
Collaboration and Communication: collaborate with cross-functional teams (data scientists, analysts, engineers, etc.) to understand business requirements and deliver data solutions that meet those needs; communicate complex technical concepts and data insights clearly to non-technical stakeholders.
Testing and Quality Assurance: perform unit testing and troubleshooting of data pipelines to ensure data consistency and integrity; implement data validation and error-checking mechanisms to maintain high-quality data.
Documentation: maintain clear documentation of data pipelines, architecture, and workflows for ease of understanding, modification, and scaling.
Agile Methodology: participate in agile development processes, including sprint planning, daily stand-ups, and code reviews.
Innovation and Research: stay up to date with the latest trends and advancements in big data technologies; continuously evaluate new tools, frameworks, and technologies to improve performance and capabilities.

Qualifications
Bachelor's or master's degree in a quantitative field such as computer science, statistics, or mathematics.
1.5+ years of experience on data management or reporting projects involving big data technologies.
Hands-on experience with, or thorough training on, technologies like AWS, Azure, GCP, Databricks, and Spark.
Experience in a Pharma commercial setting or Pharma data management is an added advantage.
General proficiency in programming languages such as Python; experience with data management (SQL, MDM, etc.) and visualization tools like Tableau and Power BI.
Excellent communication, presentation, and interpersonal skills.
Attention to detail, with a bias for quality and client centricity.
Ability to work independently and as part of a cross-functional team.

Benefits
Competitive salary and benefits package.
Opportunity to work on cutting-edge tech projects, transforming the life sciences industry.
Collaborative and supportive work environment.
Opportunities for professional development and growth.

Skills: microstrategy, mongodb, aws, tableau, google cloud, sql, etl tools, databricks, cassandra, nosql databases, apache spark, azure, python, distributed processing frameworks, big data application development, hbase, hdfs, mdm, power bi, apache hadoop
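As a sketch of the partitioning technique named under Performance Optimization above, and under the same caveat that every name here is a placeholder, a PySpark aggregation that repartitions by a grouping key and writes partitioned output:

```python
# Partitioning sketch: co-locate rows by region before a wide aggregation,
# then write output partitioned by region so readers can prune scans.
# Paths and column names are illustrative, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partitioning_example").getOrCreate()

sales = spark.read.parquet("/data/sales/")      # hypothetical source table

daily_totals = (
    sales
    .repartition("region")                      # co-locate rows for the groupBy
    .groupBy("region", "sale_date")
    .agg(F.sum("amount").alias("total_amount"))
)

(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("region")                      # physical layout mirrors query filters
    .parquet("/data/marts/daily_sales/")
)
```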
Posted 6 days ago
6.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
We at MakeMyTrip understand that every traveller is unique, and as the leading OTA in India we have the leverage to redefine the travel booking experience to meet their needs. If you love to travel and want to be part of a dynamic team that works on personalizing every user's journey, look no further. We are looking for a brilliant mind like yours to join our Data Platform team to build exciting data products at scale, where we solve for industry-best, fault-tolerant feature stores, real-time data pipelines, catalogs, and much more.

Hands-on: Spark, Scala
Technologies: Spark, Aerospike, Databricks, Kafka, Debezium, EMR, Athena, Glue, RocksDB, Redis, Airflow, MySQL, and any other data sources (e.g., Mongo, Neo4j, etc.) used by other teams.
Location: Gurgaon/Bengaluru
Experience: 6+ years
Industry Preference: E-Commerce
Posted 6 days ago
4.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
🔥 Senior Data Engineer (Python) - WFH

THIS IS A FULLY REMOTE WORKING OPPORTUNITY. WE NEED IMMEDIATE JOINERS, or someone who can join in less than 1 month. If you are interested and fulfil the criteria below, please share the following information:
1. Email ID
2. Phone number
3. Years of relevant experience
4. Updated resume
5. CCTC/ECTC
6. Notice period

Must-Have Technical Skillsets (4-5 years of experience):
● Python
● Databricks
● Spark
● Azure
● Exposure to data engineering and service development

Good-to-Have Technical Skillsets:
● GCP, AWS
● Kubernetes
● Pandas

Please note that we have a strong vetting process and tough interview rounds. We will only consider candidates with 5+ years of solid experience in Python service development, SQL, Pandas, and Airflow, with Azure or another cloud (an illustrative Airflow sketch follows this posting). You will be thoroughly tested on these skills. If you lack these skills, please don't apply, to save your time. If you are absolutely sure about these skills, send the details above.
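Since the vetting note above calls out Python with Airflow, here is a minimal, hypothetical Airflow DAG sketch of that combination; the DAG id, task logic, and paths are all invented for illustration:

```python
# Minimal Airflow 2.x DAG: one Python task that de-duplicates a small
# frame with pandas and writes parquet. Everything here is a placeholder.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_clean():
    # Hypothetical extract step; a real task would read from an actual source.
    df = pd.DataFrame({"id": [1, 2, 2], "value": [10, 20, 20]})
    df = df.drop_duplicates("id")
    df.to_parquet("/tmp/clean.parquet")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # `schedule` is the Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_clean", python_callable=extract_and_clean)
```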
Posted 6 days ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Scope of the Role:
The Principal Data Scientist will lead the architecture, development, and deployment of cutting-edge AI solutions, driving innovation in machine learning, deep learning, and generative AI. The role demands cutting-edge expertise in advanced Gen AI development, Agentic AI development, optimization, and integration with enterprise-scale applications, while fostering an experimental and forward-thinking environment. This senior-level, hands-on role offers an immediate opportunity to lead next-gen AI innovation, drive strategic AI initiatives, and shape the future of AI adoption at scale in large enterprise and industry applications.

Reports To: Chief AI Officer
Reportees: Individual Contributor role

Minimum Qualification:
Bachelor's or Master's/PhD in Artificial Intelligence, Machine Learning, Data Science, Computer Science, or a related field. Advanced certifications in AI/ML frameworks, GenAI, Agentic AI, cloud AI platforms, or AI ethics preferred.

Experience:
A minimum of 15 years of experience in AI/ML, with a proven track record in deploying AI-driven solutions, and deep expertise in Generative AI (use of proprietary and open-source LLMs/SLMs/VLMs/LCMs; experience in RAG, fine-tuning and developing derived domain-specific models, multi-agent frameworks and agentic architecture, and LLMOps).
Expertise in implementing data solutions using Python programming, pandas, numpy, scikit-learn, PyTorch, TensorFlow, data visualization, machine learning algorithms, deep learning architectures, LLMs, SLMs, VLMs, LCMs, generative AI, agents, prompt engineering, NLP, Transformer architectures, GPTs, computer vision, and MLOps for both unstructured and structured data, including synthetic data generation.
Experience in building generic and customized Conversational AI assistants.
Experience working in innovation-driven, research-intensive, or AI R&D-focused organizations.
Experience in building AI solutions for the Construction, Manufacturing, or Oil & Gas industries is good to have.

Objective / Purpose:
The Principal Data Scientist will drive breakthrough AI innovation for pan-L&T businesses by developing scalable, responsible, and production-ready Gen AI and Agentic AI models using RAG with multiple LLMs and derived domain-specific models where applicable. This role involves cutting-edge research, model deployment, AI infrastructure optimization, and AI strategy formulation to enhance business capabilities and user experiences.

Key Responsibilities:
AI Model Development & Deployment: design, train, and deploy ML/DL models for predictive analytics, NLP, computer vision, generative AI, and AI agents.
Applied AI Research & Innovation: explore emerging AI technologies such as LLMs, RAG (Retrieval-Augmented Generation; a conceptual retrieval sketch follows this posting), fine-tuning techniques, multi-modal AI, agentic architectures, reinforcement learning, and self-supervised learning, with an application orientation.
Model Optimization & Scalability: optimize AI models for inference efficiency, explainability, and responsible AI compliance.
AI Product Integration: work collaboratively with business teams, data engineers, MLOps teams, and software developers to integrate AI models into applications, APIs, and cloud platforms.
AI Governance & Ethics: ensure compliance with AI fairness, bias mitigation, and regulatory frameworks (GDPR, CCPA, AI Act).
Cross-functional Collaboration: partner with business teams, UX researchers, and domain experts to align AI solutions with real-world applications.
AI Infrastructure & Automation: develop automated pipelines, agentic model monitoring, and CI/CD for AI solutions.

Technical Expertise:
Machine Learning & Deep Learning: TensorFlow, PyTorch, scikit-learn; regression, classification, clustering, ensembling techniques (bagging, boosting), recommender systems, probability distributions, and data visualization.
Generative AI & LLMs: OpenAI GPT, Google Gemini, Llama, Hugging Face Transformers, RAG, CAG, KAG; knowledge of other LLMs, SLMs, VLMs, LCMs; LangChain.
NLP & Speech AI: BERT, T5, Whisper.
Computer Vision: YOLO, OpenCV, CLIP, Convolutional Neural Networks (CNNs).
MLOps & AI Infrastructure: MLflow, Kubeflow, Azure ML.
Data Platforms: Databricks, Pinecone, FAISS, Elasticsearch, semantic search, Milvus, Weaviate.
Cloud AI Services: Azure OpenAI & Cognitive Services.
Explainability & Responsible AI: SHAP, LIME, FairML.
Prompt Engineering.

Publish internal/external research papers, contribute to AI patents, and present at industry conferences and workshops. Evaluate open-source AI/ML frameworks and commercial AI products to enhance the organization's AI capabilities.

Behavioural Attributes:
Business Acumen: ability to align AI solutions with business goals.
Market Foresight: identifying AI trends and emerging technologies.
Change Management: driving AI adoption in dynamic environments.
Customer Centricity: designing AI solutions with user impact in mind.
Collaborative Leadership: working cross-functionally with diverse teams.
Ability to Drive Innovation & Continuous Improvement: research-driven AI development.

Key Value Drivers:
Advancing AI-driven business transformation securely, optimally, and at scale.
Reducing time-to-value for AI-powered innovations.
Enabling AI governance, compliance, and ethical AI adoption.

Future Career Path:
The future career path is establishing oneself as a world-class SME with deep domain experience and thought leadership in applying next-gen AI technologies in the construction, energy, and manufacturing domains. The career would progress from technology specialization into leadership roles, in challenging positions such as Head of AI Strategy and Chief AI Officer, driving enterprise AI strategies, governance, and innovation for ICs/BUs across pan-L&T businesses.
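To make the RAG responsibility above concrete, a toy retrieval step reduced to plain numpy: score a query embedding against document embeddings by cosine similarity and keep the top-k chunks as LLM context. The embed function is a stand-in, not a real embedding model; a production system would call an embedding model and a vector database such as the FAISS/Milvus/Weaviate options the posting lists:

```python
# Conceptual RAG retrieval: rank document chunks by cosine similarity
# to the query and assemble the best matches into prompt context.
# embed() is a placeholder, not a real embedding model.
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Stand-in embedding: a pseudo-random unit vector seeded by the text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

docs = ["safety manual chunk", "project schedule chunk", "welding spec chunk"]
doc_vecs = np.stack([embed(d) for d in docs])

query_vec = embed("what are the welding requirements?")
scores = doc_vecs @ query_vec               # cosine similarity (vectors are unit-norm)
top_k = np.argsort(scores)[::-1][:2]        # indices of the best-matching chunks

context = "\n".join(docs[i] for i in top_k) # would be prepended to the LLM prompt
print(context)
```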
Posted 6 days ago
10.0 - 14.0 years
0 Lacs
Vijayawada, Andhra Pradesh
On-site
As a Lead Data Engineer based in Vijayawada, Andhra Pradesh, you will be responsible for leveraging your extensive experience in data engineering and data architecture to design and develop end-to-end data solutions, data pipelines, and ETL processes. With a Bachelor's or Master's degree in Computer Science, Information Systems, or a related field, along with over 10 years of relevant experience, you will play a crucial role in ensuring the success of data projects.

You will demonstrate your strong knowledge of data technologies such as Snowflake, Databricks, Apache Spark, Hadoop, dbt, Fivetran, and Azure Data Factory. Your expertise in Python and SQL will be essential in tackling complex data challenges, and your understanding of data governance, data quality, and data security principles will guide you in maintaining high standards of data management.

In this role, your excellent problem-solving and analytical skills will be put to the test as you work both independently and collaboratively in an Agile environment. Your strong communication and leadership skills will be instrumental in managing projects and teams and in engaging in pre-sales activities. You will have the opportunity to showcase your technical leadership by delivering solutions within defined timeframes and building strong client relationships.

Moreover, your experience in complete project life cycle activities, agile methodologies, and working with globally distributed teams will be a valuable asset in this position. Your proven track record of managing complex consulting projects and your ability to communicate effectively with technical and non-technical staff will contribute to the overall success of the team.

If you are looking for a challenging role that combines technical expertise, leadership skills, and client engagement, this Lead Data Engineer position offers a dynamic opportunity to excel in a fast-paced and collaborative environment.
Posted 6 days ago
2.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for designing, developing, and maintaining data pipelines and ETL workflows for processing large-scale structured and unstructured data. Expertise in AWS data services (S3, Workflows, Databricks, SQL), big data processing, real-time analytics, and cloud data integration is crucial for this role, and team-leading experience is also valuable.

Your key responsibilities will include redesigning optimized and scalable ETL using Spark, Python, SQL, and UDFs; implementing ETL/ELT Databricks workflows for structured data processing; and creating data quality checks using Unity Catalog (an illustrative check follows this posting). You will also be expected to drive daily status calls and sprint planning meetings, and to ensure the security, quality, and compliance of data pipelines. Collaborating with data architects and analysts to meet business requirements is also a key aspect of this role.

To qualify for this position, you should have at least 8 years of experience in data engineering, with a minimum of 2 years working on AWS services. Hands-on experience with tools like S3, Databricks, or Workflows is essential. Knowledge of Adverity, experience with ticketing tools like Asana or JIRA, and data analysis skills are considered advantageous. Strong SQL and data processing skills (e.g., PySpark, Python) are required, along with experience in data cataloging, lineage, and governance frameworks. Your contribution to CI/CD integration, observability, and documentation, as well as your ability to quickly analyze issues and drive collaboration within the team, will be instrumental in achieving the goals of the organization.
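A hedged sketch of the kind of data-quality gate mentioned above, written as plain PySpark checks; the staging path, key column, and pass/fail rule are placeholders rather than the employer's actual setup:

```python
# Illustrative data-quality gate: block a batch that has null keys or
# duplicate keys before promoting it. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("/data/staging/orders/")   # hypothetical staging data

null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = df.count() - df.dropDuplicates(["order_id"]).count()

if null_keys > 0 or dupes > 0:
    # A production job might log to an audit table instead of raising.
    raise ValueError(f"DQ failed: {null_keys} null keys, {dupes} duplicate keys")

df.write.mode("append").parquet("/data/curated/orders/")
```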
Posted 6 days ago
6170 Jobs | New Delhi