0 years
0 Lacs
Sonipat, Haryana, India
On-site
About the Role: Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India’s first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Engineer + Subject Matter Expert – Data Mining to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.

Key Responsibilities:
● Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", "Big Data", and "Data Analytics" courses, covering the full syllabus from foundational concepts to advanced techniques.
● Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
● Teach the theory, implementation, and evaluation of a wide range of algorithms for classification, association rule mining, clustering, and anomaly detection.
● Design and facilitate practical lab sessions and assignments that give students hands-on experience with modern data tools and software.
● Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
● Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
● Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to keep the curriculum relevant and cutting-edge.
● Contribute to the academic and research environment of the department and the university.

Required Qualifications:
● A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
● Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
● Strong practical proficiency in Python and its data science ecosystem, specifically scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
● Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
● Excellent communication and interpersonal skills.

Preferred Qualifications:
● A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
● Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
● Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
● Experience in mentoring student teams for data science competitions or hackathons.

Perks & Benefits:
● Competitive salary packages aligned with industry standards.
● Access to state-of-the-art labs and classroom facilities.

To know more about us, feel free to explore our website: Newton School of Technology. We look forward to the possibility of having you join our academic team and help shape the future of tech education!
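For candidates brushing up on the association-rule topic in the syllabus above, here is a minimal, self-contained Python sketch of support and confidence, the two core rule metrics; the basket data and item names are illustrative, not from the posting.

```python
# Toy transaction database (illustrative only).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk", "butter"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    itemset = set(itemset)
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Support of the full rule divided by support of the antecedent."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

# Rule {bread} -> {milk}: both appear in 2 of 4 baskets; bread in 3 of 4.
print(support({"bread", "milk"}, transactions))       # 0.5
print(confidence({"bread"}, {"milk"}, transactions))  # ~0.667
```

Algorithms such as Apriori build on exactly these two quantities, pruning itemsets whose support falls below a chosen threshold.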
Posted 5 days ago
8.0 years
0 Lacs
Sholinganallur, Tamil Nadu, India
On-site
Role: MLE + Vertex AI
Mode: Permanent, Full time
Experience: 4-8 years

Job Description: The candidate should be a self-starter, able to contribute independently in the absence of any guidance. Strong Vertex AI experience is a prerequisite, including having moved multiple MLE workloads onto Vertex AI; the client is not looking to act as guides/mentors. They are seeking an MLE with hands-on experience delivering machine learning solutions using Vertex AI and strong Python skills. The person must have 5+ years of experience, with 3+ in MLE; advanced knowledge of machine learning, engineering industry frameworks, and professional standards; demonstrated proficiency using cloud technologies and integrating with ML services, including GCP Vertex AI, DataRobot, or AWS SageMaker, in large and complex organisations; and experience with SQL and Python environments.

Also required: experience in technology delivery (waterfall and agile); Python and SQL skills; experience with distributed programming (e.g., Apache Spark, PySpark); software engineering experience/skills; experience working with big data cloud platforms (Azure, Google Cloud Platform, AWS); DevOps and CI/CD experience; experience with unit testing and TDD; experience with infrastructure as code; direct client interaction.

Must Have Skills: Vertex AI, MLE, AWS, Python, SQL

Interested candidates can reach us @7338773388 or careers@w2ssolutions.com & hr@way2smile.com
Posted 5 days ago
6.0 years
0 Lacs
Gujarat, India
On-site
Job Summary: We are looking for a highly skilled and self-motivated Technical Lead - Data, Cloud & AI - to design, develop, and optimize data pipelines and infrastructure that power AI/ML solutions in the cloud. The ideal candidate will have deep experience in data engineering, strong exposure to cloud platforms, and familiarity with machine learning workflows. FinOps and related experience is preferred. This role will play a critical part in enabling data-driven innovation and scaling intelligent applications across the organization.

Required Skills & Experience:
6+ years of experience in data engineering with a strong understanding of data architecture
Hands-on experience with cloud platforms: AWS (Glue, S3, Redshift) or GCP (BigQuery, Dataflow)
Strong programming skills in Python, Java, and SQL; knowledge of Spark or Scala is a plus
Experience with ETL/ELT tools and orchestration frameworks like Apache Airflow, dbt, or Prefect
Familiarity with machine learning workflows, the model lifecycle, and MLOps practices
Proficiency with both batch and streaming data (Kafka, Kinesis, Pub/Sub)
Experience with containerization and deployment (Docker; Kubernetes is a plus)
Good understanding of data security and access control in cloud environments
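The batch ETL/ELT pipeline work described above can be sketched end to end in plain Python; the source data, field names, and aggregation are illustrative, not from the role.

```python
import csv
import io

# Raw source feed, standing in for a file landed from an upstream system.
raw = """order_id,amount,country
1,100.5,IN
2,,US
3,250.0,IN
"""

def extract(text):
    """Extract: parse the raw feed into rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts, cast types."""
    return [
        {"order_id": int(r["order_id"]),
         "amount": float(r["amount"]),
         "country": r["country"]}
        for r in rows if r["amount"]
    ]

def load(rows):
    """Load: write into an in-memory aggregate standing in for a warehouse table."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

print(load(transform(extract(raw))))  # {'IN': 350.5}
```

Real pipelines add the concerns the posting lists on top of this skeleton: scheduling (Airflow/Prefect), data quality checks, and monitoring.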
Posted 5 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Company Description: At Innovatics, we help conquer tough business challenges with advanced analytics and AI. Specializing in transforming complexity into clarity and business uncertainties into data-driven opportunities, our dedicated team of data analytics and AI consultants is committed to achieving tangible results. Our services, provided in the USA, Australia, Canada, and India, include end-to-end data analytics, data strategy, data engineering, and AI consulting. Passionate about data, we pride ourselves on turning ideas into actionable insights.

Role Description: This is a full-time on-site role located in Ahmedabad for a Sr. Data Engineer at Innovatics. The Sr. Data Engineer will be responsible for the design, development, and optimization of data architectures and pipelines. Day-to-day tasks include data modeling, building and managing ETL processes, data warehousing, and performing data analytics to support business decisions. The role involves collaborating with data scientists, analysts, and other stakeholders to ensure efficient and effective data solutions.

Job Description:
5+ years of experience in a Data Engineer role
Experience with object-oriented/object function scripting languages: Python, Scala, Golang, Java, etc.
Experience with big data tools such as Spark, Hadoop, Kafka, Airflow, and Hive
Experience with streaming data: Spark, Kinesis, Kafka, Pub/Sub, Event Hub
Experience with GCP, Azure Data Factory, or AWS
Strong SQL scripting
Experience with ETL tools
Knowledge of Snowflake Data Warehouse
Knowledge of orchestration frameworks: Airflow/Luigi
Good to have: knowledge of Data Quality Management frameworks
Good to have: knowledge of Master Data Management
Self-learning abilities are a must
Familiarity with upcoming new technologies is a strong plus
Should have a bachelor's degree in big data analytics, computer engineering, or a related field

Candidate Attributes:
Experience in data engineering, including design and development of data architectures
Proficiency in data modeling to support the data needs of various projects
Skills in Extract, Transform, Load (ETL) processes to ensure smooth data integration
Knowledge of data warehousing to manage and store large datasets efficiently
Strong data analytics skills to derive actionable insights from data
Excellent problem-solving and analytical skills
Ability to work independently and collaboratively in a team environment
Bachelor's or master's degree in Computer Science, Engineering, or a related field
Experience in the AI and advanced analytics field is a plus
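The ELT pattern implied by the SQL-heavy requirements above (land raw data first, then transform inside the engine) can be sketched with the standard-library sqlite3 module; the table and column names are illustrative, not from the posting.

```python
import sqlite3

# In-memory database standing in for a warehouse such as Snowflake.
con = sqlite3.connect(":memory:")

# "Load" step: raw rows land untransformed in a staging table.
con.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
con.executemany("INSERT INTO raw_events VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 7.5)])

# "Transform" step: aggregation happens inside the engine, in SQL.
con.execute("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total
    FROM raw_events
    GROUP BY user_id
""")

rows = con.execute(
    "SELECT user_id, total FROM user_totals ORDER BY user_id").fetchall()
print(rows)  # [(1, 15.0), (2, 7.5)]
```

Orchestrators such as Airflow or Luigi, also listed above, would schedule the load and transform steps as separate dependent tasks rather than one script.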
Posted 5 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Number of positions – 5
Location – Cairo (Egypt); non-Egyptian resources
Experience – 5+ years
Interview process – 1 (internal) + 1 (external) + 1 (fitment)
Contract period – 1 year

Job Summary: We are seeking a seasoned MLOps Engineer, willing to relocate to Egypt, to operationalize and scale machine learning solutions across the enterprise. The ideal candidate will have deep expertise in deploying ML models in production environments using tools like MLflow, managing large-scale data infrastructure on Cloudera/Hadoop, and enabling collaboration between data science and engineering teams through CI/CD pipelines on Azure DevOps. Familiarity with Knowledge Graphs and enterprise data lineage is a strong advantage.

Key Responsibilities:
Design, implement, and manage end-to-end ML pipelines, ensuring scalability, reliability, and reproducibility.
Build robust CI/CD pipelines using Azure DevOps to support model training, testing, deployment, and monitoring.
Integrate model tracking, versioning, and lifecycle management using MLflow.
Work with Cloudera and Hadoop ecosystem tools (HDFS, Hive, Spark) to handle large-scale data ingestion and transformation for ML use cases.
Develop and manage Knowledge Graph pipelines to enrich metadata and model dependencies.
Collaborate with data scientists to productionize models and ensure governance, auditability, and traceability.
Implement monitoring solutions for model drift, performance, and data quality in production environments.
Support infrastructure automation and DevOps practices aligned with enterprise security and compliance standards.
Ensure alignment with data governance, privacy regulations, and organizational best practices.

Required Skills and Qualifications:
6–8 years of experience in Machine Learning Engineering or MLOps roles.
Proficiency with MLflow for tracking, registering, and deploying models.
Strong hands-on experience with Cloudera and Hadoop platforms.
Experience with Azure DevOps for source control, pipeline automation, and deployment.
Working knowledge of Knowledge Graphs and metadata-driven AI operations.
Experience with containerization and orchestration tools (Docker, Kubernetes).
Proficiency in Python and scripting for automation and pipeline development.
Familiarity with ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
Good understanding of data security, access control, and audit logging.

Preferred Qualifications:
Azure certifications such as Azure Data Engineer or Azure AI Engineer Associate.
Experience working in regulated industries (e.g., BFSI, Pharma, Healthcare).
Exposure to Apache NiFi, Airflow, or similar orchestration tools.

Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and stakeholder management.
Passion for automation and continuous improvement.
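The model-drift monitoring responsibility above is often approximated in practice with the Population Stability Index (PSI), which compares a model's live score distribution against its training-time baseline. Below is a minimal stdlib sketch; the bin proportions are illustrative. A common rule of thumb treats PSI below 0.1 as no significant shift, though thresholds vary by organization.

```python
import math

def psi(expected, actual):
    """Population Stability Index over matching histogram bins.

    `expected` and `actual` are bin proportions that each sum to 1;
    empty bins are skipped to avoid log(0).
    """
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

baseline = [0.25, 0.25, 0.25, 0.25]    # training-time score distribution
production = [0.30, 0.25, 0.25, 0.20]  # live distribution from monitoring

print(round(psi(baseline, production), 4))  # 0.0203 -> small, likely no alert
```

In an MLOps pipeline, a check like this would run on a schedule against recent predictions and raise an alert (or trigger retraining) when the index crosses a threshold.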
Posted 5 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Summary
Data engineers are mainly responsible for designing, building, managing, and operationalizing data pipelines to support key data and analytics use cases. They play a crucial role in constructing and maintaining a modern, scalable data platform that utilizes the full capabilities of a Lakehouse Platform. You will be a key contributor to our data-driven organization, playing a vital role in both building a modern data platform and maintaining our Enterprise Data Warehouse (EDW). You will leverage your expertise in the Lakehouse Platform to design, develop, and deploy scalable data pipelines using modern and evolving technologies. Simultaneously, you will take ownership of the EDW architecture, ensuring its performance, scalability, and alignment with evolving business needs. Your responsibilities will encompass the full data lifecycle, from ingestion and transformation to delivery of high-quality datasets that empower analytics and decision-making.

Duties and responsibilities
Build data pipelines using Azure Databricks: Build and maintain scalable data pipelines and workflows within the Lakehouse environment. Transform, cleanse, and aggregate data using Spark SQL or PySpark. Optimize Spark jobs for performance, cost efficiency, and reliability. Develop and manage Lakehouse tables for efficient data storage and versioning. Utilize notebooks for interactive data exploration, analysis, and development. Implement data quality checks and monitoring to ensure accuracy and reliability.
Drive automation: Implement automated data ingestion processes using functionality available in the data platform, optimizing for performance and minimizing manual intervention. Design and implement end-to-end data pipelines, incorporating transformations, data quality checks, and monitoring. Utilize CI/CD tools (Azure DevOps/GitHub Actions) to automate pipeline testing, deployment, and version control.
Enterprise Data Warehouse (EDW) management: Create and maintain data models, schemas, and documentation for the EDW. Collaborate with data analysts, data scientists, and business stakeholders to gather requirements, design data marts, and provide support for reporting and analytics initiatives. Troubleshoot and resolve any issues related to data loading, transformation, or access within the EDW.
Educate and train: The data engineer should be curious and knowledgeable about new data initiatives and how to address them, applying their data and/or domain understanding to new data requirements. They will also be responsible for proposing appropriate (and innovative) data ingestion, preparation, integration, and operationalization techniques to address these requirements, and will be required to train counterparts in these data pipelining and preparation techniques.
Ensure compliance with data governance and security: The data engineer is responsible for ensuring that the data sets provided to users comply with established governance and security policies. Data engineers should work with data governance and data security teams while creating new and maintaining existing data pipelines to guarantee alignment and compliance.

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Information Management, Software Engineering, or equivalent work experience.
Work experience: At least four years of experience in data management disciplines, including data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks. At least three years of experience working in cross-functional teams and collaborating with business stakeholders in support of a departmental and/or multi-departmental data management and analytics initiative.
Technical knowledge, abilities, and skills: Ability to design, build, and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong knowledge of database programming languages and hands-on experience with any RDBMS.
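The "data quality checks" duty above usually boils down to rule-based validation of each batch before it is published. Here is a minimal plain-Python sketch; the rules, field names, and sample rows are illustrative, not from the posting.

```python
# Sample batch of ingested rows (illustrative only).
rows = [
    {"id": 1, "email": "a@example.com", "amount": 12.0},
    {"id": 2, "email": None,            "amount": -3.0},
    {"id": 2, "email": "c@example.com", "amount": 7.5},
]

def run_checks(rows):
    """Return (row_index, rule_name) pairs for every failed check."""
    failures = []
    seen_ids = set()
    for i, r in enumerate(rows):
        if r["id"] in seen_ids:          # uniqueness check
            failures.append((i, "duplicate id"))
        seen_ids.add(r["id"])
        if not r["email"]:               # completeness check
            failures.append((i, "missing email"))
        if r["amount"] < 0:              # validity check
            failures.append((i, "negative amount"))
    return failures

print(run_checks(rows))
# [(1, 'missing email'), (1, 'negative amount'), (2, 'duplicate id')]
```

In a Lakehouse pipeline the same idea is typically expressed declaratively (e.g., as table expectations) and wired into monitoring so a failing batch is quarantined rather than loaded.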
Posted 5 days ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Inviting applications for the role of Senior Principal Consultant, Data Scientist for one of our clients (MNC).

In this role, we are looking for candidates with relevant years of experience in Text Mining / Natural Language Processing (NLP) tools, data science, Big Data, and algorithms. Full-cycle experience is desirable in at least one large-scale Text Mining/NLP project, from creating a business use case and a text analytics assessment/roadmap, through technology and analytics solutioning, to implementation and change management, along with considerable experience in Hadoop, including development in the map-reduce framework. The Text Mining Scientist (TMS) is expected to play a pivotal bridging role between enterprise database teams and business/functional resources. At a broad level, the TMS will leverage his/her solutioning expertise to translate the customer's business need into a techno-analytic problem and work with database teams to bring large-scale text analytic solutions to fruition. The right candidate should have prior experience in developing text mining and NLP solutions using open-source tools.

Responsibilities
Develop transformative AI/ML solutions to address our clients' business requirements and challenges.
Project delivery: successful delivery of projects involving data pre-processing, model training and evaluation, and parameter tuning.
Manage stakeholder/customer expectations.
Project blueprinting and project documentation; creating the project plan.
Understand and research cutting-edge industrial and academic developments in AI/ML with NLP/NLU applications in diverse industries such as CPG, Finance, etc.
Conceptualize, design, build, and develop solution algorithms that demonstrate the minimum required functionality within tight timelines.
Interact with clients to collect, synthesize, and propose requirements, and create an effective analytics/text mining roadmap.
Work with digital development teams to integrate and transform these algorithms into production-quality applications.
Do applied research on a wide array of text analytics and machine learning projects; file patents and publish papers.
Collaborate with service line teams to design, implement, and manage Generative AI (GenAI) solutions.
Apply familiarity with generative models, prompt engineering, and fine-tuning techniques to develop innovative AI solutions.
Design, develop, and implement solutions tailored to client needs, understanding business requirements and translating them into technical solutions using GenAI.

Qualifications we seek in you!

Minimum Qualifications / Skills
MS in Computer Science, Information Systems, or Computer Engineering / Systems Engineering, with relevant experience in Text Mining / Natural Language Processing (NLP) tools, data science, Big Data, and algorithms.
Familiarity with Generative AI technologies; ability to design and implement GenAI solutions.
Technology: open-source text mining paradigms such as NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, and Lucene, and cloud-based NLU tools such as DialogFlow and MS LUIS.
Exposure to statistical toolkits such as R, Weka, S-Plus, Matlab, and SAS Text Miner.
Strong core Java experience in large-scale product development and functional knowledge of RDBMSs.
Hands-on programming in the Hadoop ecosystem and concepts in distributed computing.
Very good Python/R programming skills; Java programming skills a plus.
GenAI tools; certifications in AI/ML or GenAI.

Methodology
Solutioning and consulting experience in verticals such as BFSI and CPG, with experience delivering text analytics on large structured and unstructured data.
A solid foundation in AI methodologies such as ML, DL, NLP, Neural Networks, Information Retrieval and Extraction, NLG, and NLU.
Exposure to concepts in Natural Language Processing and statistics, especially in applications such as Sentiment Analysis, Contextual NLP, Dependency Parsing, Chunking, and Summarization.
Demonstrated ability to conduct look-ahead client research focused on supplementing and strengthening the client's analytics agenda with newer tools and techniques.

Preferred Qualifications / Skills

Technology
Expert-level understanding of NLP, NLU, and machine learning/deep learning methods.
OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, Lucene, NoSQL.
UI development paradigms that enable visualization of text mining insights, e.g., Adobe Flex Builder, HTML5, CSS3.
GenAI and AI/ML tools; Linux, Windows, and GPU experience.
Spark and Scala for distributed computing.
Deep learning frameworks such as TensorFlow, Keras, Torch, and Theano.
Certifications in AI/ML or GenAI.

Methodology
Social network modeling paradigms, tools, and techniques.
Text analytics using Natural Language Processing tools such as Support Vector Machines, and Social Network Analysis.
Previous experience with text analytics implementations using open-source packages and/or SAS Text Miner.
Ability to prioritize, a consultative mindset, and time management skills.
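At the base of every text-mining pipeline named above sits tokenisation and term counting; a few lines of standard-library Python show the idea. The stopword list and example sentence are illustrative only.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real NLP toolkits (NLTK, spaCy) ship
# much larger, language-specific lists.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "were"}

def term_frequencies(text):
    """Lowercase, tokenise on letters/apostrophes, drop stopwords, count."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

doc = "The quality of the product and the speed of delivery were great."
print(term_frequencies(doc).most_common(3))
```

Term-frequency vectors like this are the input to the classical methods in the posting (TF-IDF weighting, SVM classifiers, sentiment models), before any deep learning or GenAI layer is added.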
Posted 5 days ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Company Vision: NowPurchase is transforming the $140B metal manufacturing industry. The metal industry forms the backbone of the economy and the fundamental building block of the physical world, whether in transportation, construction, or machinery. NowPurchase is a rich, digital marketplace where metal manufacturers (foundries + steel plants) can procure high-quality raw materials (scrap, pig iron, ferroalloys, additives, nodularisers) in a trusted manner. Our technology allows them to optimize their manufacturing process to ensure high productivity and resilience to failure. We currently serve over 250 factories nationwide and are looking to aggressively expand our footprint across India. You can learn more at www.nowpurchase.com.

Job Description
Position – Senior Executive - Marketing
Reporting to – Deputy Manager
Experience – 2-4 years
Location – Kolkata
No. of Positions – 1
Qualification – Any Graduate

NowPurchase is expanding rapidly across industrial clusters. To fuel this growth, we need a marketing leader who can bridge business goals with creative execution, ensuring that every email, social post, event, and WhatsApp campaign doesn't just reach, but resonates. Our ideal teammate is someone who understands that marketing is the engine of visibility and trust. You'll build our presence across platforms, create hook-rich content that educates and excites our MOA (Manufacturers of Aspirations), and launch campaigns that lead to real sales conversations, audits, and partnerships.

Lead NowPurchase's content creation across LinkedIn, YouTube, Instagram, and Facebook, crafting posts, stories, and campaigns that educate, entertain, and convert.
Launch powerful WhatsApp and email marketing campaigns sequenced with our buyer journey, driving inquiries, nurturing leads, and accelerating decision-making.
Develop and execute marketing content strategies that spark curiosity and generate high-intent leads.
Collaborate with the founding team to craft NowPurchase's evolving narrative, ensuring consistent branding, tone, and positioning across all communication.
Own event marketing, from planning and vendor coordination to live engagement and post-event follow-up, making every offline interaction unforgettable.
Use data and analytics to evaluate campaign performance, refine messaging, and optimize lead conversion.

Desired Attributes:
Think in hooks and headlines: you can turn technical concepts into click-worthy content.
Have experience running email or WhatsApp campaigns and analyzing funnel performance.
Are excited to create stories from metrics, customer wins, and product benefits.
Take initiative and own projects end-to-end, from the first draft to the last follow-up.
Are curious about manufacturing, metals, and how tech is changing traditional industries.

Compensation & Benefits:
Compensation: as per industry standards and the pedigree of the candidate.
Group Medical Insurance: over and above compensation; a 3 lakh floater for the family, including parents, spouse, and children. A top-up option is also available upon personal request.
Generous leave structure, including maternity and paternity leaves.
Snacks on the house.

Hiring Process
Screening of applicants and telephonic discussion with HR.
F2F/video discussion with the Hiring Manager.
Final-round interview with the Director.
Email communication on final feedback.
Posted 5 days ago
3.0 years
0 Lacs
Thiruvananthapuram Taluk, India
On-site
Position – Data Engineer
Experience – 3+ years
Location – Trivandrum, Hybrid
Salary – Up to 8 LPA

Job Summary: We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities:
● Design and Development:
○ Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes.
○ Implement data models and schemas that support analytical and reporting requirements.
○ Build and maintain robust data APIs for data consumption by various applications and services.
● Data Infrastructure:
○ Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
○ Ensure data security, privacy, and compliance with relevant regulations.
○ Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.
● Collaboration & Optimization:
○ Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
○ Optimize existing data processes for efficiency, cost-effectiveness, and performance.
○ Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.
● Troubleshooting & Support:
○ Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
○ Provide support and expertise to teams consuming data from the data platform.

Required Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
● 3+ years of hands-on experience as a Data Engineer or in a similar role.
● Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
● Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
● Proven experience with ETL/ELT tools and concepts.
● Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
● Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
● Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
● Experience with version control systems (e.g., Git).
● Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications:
● Master's degree in a relevant field.
● Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
● Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
● Experience with data streaming technologies (e.g., Kafka, Kinesis).
● Knowledge of containerization technologies (e.g., Docker, Kubernetes).
● Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
● Understanding of DevOps principles as applied to data pipelines.
● Prior experience in Telecom is a plus.
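The workflow-orchestration tools listed above (Airflow, Data Factory, Step Functions) all reduce to running tasks in dependency order over a DAG. The core idea fits in a few lines using Python's standard-library graphlib; the task names are illustrative, not from the posting.

```python
from graphlib import TopologicalSorter

# Pipeline dependency graph: each task maps to the tasks it depends on.
deps = {
    "transform": {"extract_orders", "extract_users"},
    "load_warehouse": {"transform"},
    "data_quality_check": {"load_warehouse"},
}

# static_order() yields a valid execution order (and raises CycleError
# if the graph is not actually a DAG).
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Real orchestrators add scheduling, retries, and parallel execution of independent tasks (here, the two extracts) on top of exactly this ordering.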
Posted 5 days ago
0 years
0 Lacs
Chandigarh, India
On-site
About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills like public speaking, and more. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation—you're in the right place. Location: Gurgaon (On-site) Experience Level: Entry to Early Career (Freshers welcome!) Shift Options: Domestic | Middle East | International Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required Target Joiners: Any (Bachelor’s or Master’s) 🔥 What You'll Be Owning (Your Impact): Lead Activation: Engage daily with high-intent leads through dynamic channels—calls, video consults, and more. Sales Funnel Pro: Own the full sales journey—from first hello to successful enrollment. Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy. Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark’s growth engine. Client Success: Ensure a smooth onboarding experience and transition for every new learner. Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game. 💡 Why Join Sales at Planet Spark? Only Warm Leads: Skip the cold calls—our leads already know us and have completed a demo session. High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle. Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths. Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs. Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins. Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today. 
🎯 What You Bring to the Table: Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats. Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence. Empathy First: You genuinely care about clients’ goals and tailor your approach to meet them. Goal-Oriented: You’re self-driven, proactive, and hungry for results. Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools. ✨ What’s in It for You? 💼 High-growth sales career with serious earning potential 🌱 Continuous upskilling in EdTech, sales, and communication 🧘 Supportive culture that values growth and well-being 🎯 Opportunity to work at the cutting edge of education innovation
Posted 5 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Client: Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focus spans digital engineering, cloud services, AI and data analytics, enterprise applications (SAP, Oracle, Salesforce), IT infrastructure, and business process outsourcing. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub alongside its U.S. headquarters.

Job Title: Big Data Testing
Key Skills: SQL, Data Testing, ETL Testing
Job Locations: Pune
Experience: 6 to 12 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Summary: We are seeking a highly skilled Big Data Tester to join our dynamic team. The ideal candidate will be responsible for validating and ensuring the quality of big data solutions across various platforms. You will work with cutting-edge tools and technologies to test large-scale distributed systems, ensuring the accuracy, performance, and reliability of data in complex ecosystems.

Key Responsibilities:
Test Planning and Strategy: Design, develop, and execute test plans and test cases for big data applications. Work closely with the development team to understand the system architecture and data flow. Define testing strategies for different big data technologies such as Hadoop, Spark, Hive, etc.
Big Data Testing: Perform functional, regression, performance, and integration testing for big data applications. Ensure data accuracy and integrity in data processing and storage. Validate ETL processes, data pipelines, and transformations in large-scale distributed systems.
Automation and Tooling: Automate test cases using big data-specific tools like Apache JMeter, Selenium, or custom scripts. Leverage frameworks like TestNG, JUnit, or PyTest to automate functional and regression tests. Use data validation and comparison tools for validating large datasets and ensuring accuracy. Performance and Scalability Testing: Conduct performance tests to assess the scalability and speed of big data platforms. Perform load and stress testing on large datasets to ensure platform reliability under high load. Issue Reporting & Collaboration: Document defects, track their progress, and work closely with developers to resolve issues. Collaborate with data engineers, developers, and business stakeholders to understand data requirements and specifications. Review test results, perform root cause analysis, and propose solutions to improve data quality and system performance.
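The reconciliation step at the heart of ETL validation, confirming that what landed in the target matches what left the source, can be sketched in plain Python. The table and column names below are invented for illustration; on a real big-data stack the same counts and checksums would be computed with Spark SQL or Hive against the actual source and target tables.

```python
import sqlite3
import hashlib

def table_fingerprint(conn, table, cols):
    """Return (row_count, order-independent checksum) for a table.
    Per-row MD5 digests are XORed so row order does not matter.
    (Duplicate rows cancel out under XOR; a real tool would also
    compare duplicate counts.)"""
    rows = conn.execute(f"SELECT {', '.join(cols)} FROM {table}").fetchall()
    digest = 0
    for row in rows:
        digest ^= int(hashlib.md5("|".join(map(str, row)).encode()).hexdigest(), 16)
    return len(rows), digest

# Simulate a source system and an ETL target with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, amount REAL)")
conn.execute("CREATE TABLE tgt (id INTEGER, amount REAL)")
data = [(1, 10.0), (2, 20.5), (3, 7.25)]
conn.executemany("INSERT INTO src VALUES (?, ?)", data)
conn.executemany("INSERT INTO tgt VALUES (?, ?)", reversed(data))  # load order differs

src_count, src_sum = table_fingerprint(conn, "src", ["id", "amount"])
tgt_count, tgt_sum = table_fingerprint(conn, "tgt", ["id", "amount"])
assert src_count == tgt_count, "row-count mismatch between source and target"
assert src_sum == tgt_sum, "content mismatch: at least one row changed in flight"
print("reconciliation passed")
```

The same two checks (row counts, then order-independent content hashes) scale to distributed systems because both reduce cleanly across partitions.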
Posted 6 days ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills like public speaking, and more. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation—you're in the right place. Location: Gurgaon (On-site) Experience Level: Entry to Early Career (Freshers welcome!) Shift Options: Domestic | Middle East | International Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required Target Joiners: Any (Bachelor’s or Master’s) 🔥 What You'll Be Owning (Your Impact): Lead Activation: Engage daily with high-intent leads through dynamic channels—calls, video consults, and more. Sales Funnel Pro: Own the full sales journey—from first hello to successful enrollment. Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy. Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark’s growth engine. Client Success: Ensure a smooth onboarding experience and transition for every new learner. Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game. 💡 Why Join Sales at Planet Spark? Only Warm Leads: Skip the cold calls—our leads already know us and have completed a demo session. High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle. Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths. Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs. Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins. Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today. 
🎯 What You Bring to the Table: Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats. Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence. Empathy First: You genuinely care about clients’ goals and tailor your approach to meet them. Goal-Oriented: You’re self-driven, proactive, and hungry for results. Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools. ✨ What’s in It for You? 💼 High-growth sales career with serious earning potential 🌱 Continuous upskilling in EdTech, sales, and communication 🧘 Supportive culture that values growth and well-being 🎯 Opportunity to work at the cutting edge of education innovation
Posted 6 days ago
0 years
0 Lacs
Vadodara, Gujarat, India
On-site
About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills like public speaking, and more. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation—you're in the right place. Location: Gurgaon (On-site) Experience Level: Entry to Early Career (Freshers welcome!) Shift Options: Domestic | Middle East | International Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required Target Joiners: Any (Bachelor’s or Master’s) 🔥 What You'll Be Owning (Your Impact): Lead Activation: Engage daily with high-intent leads through dynamic channels—calls, video consults, and more. Sales Funnel Pro: Own the full sales journey—from first hello to successful enrollment. Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy. Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark’s growth engine. Client Success: Ensure a smooth onboarding experience and transition for every new learner. Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game. 💡 Why Join Sales at Planet Spark? Only Warm Leads: Skip the cold calls—our leads already know us and have completed a demo session. High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle. Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths. Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs. Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins. Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today. 
🎯 What You Bring to the Table: Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats. Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence. Empathy First: You genuinely care about clients’ goals and tailor your approach to meet them. Goal-Oriented: You’re self-driven, proactive, and hungry for results. Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools. ✨ What’s in It for You? 💼 High-growth sales career with serious earning potential 🌱 Continuous upskilling in EdTech, sales, and communication 🧘 Supportive culture that values growth and well-being 🎯 Opportunity to work at the cutting edge of education innovation
Posted 6 days ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills like public speaking, and more. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation—you're in the right place. Location: Gurgaon (On-site) Experience Level: Entry to Early Career (Freshers welcome!) Shift Options: Domestic | Middle East | International Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required Target Joiners: Any (Bachelor’s or Master’s) 🔥 What You'll Be Owning (Your Impact): Lead Activation: Engage daily with high-intent leads through dynamic channels—calls, video consults, and more. Sales Funnel Pro: Own the full sales journey—from first hello to successful enrollment. Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy. Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark’s growth engine. Client Success: Ensure a smooth onboarding experience and transition for every new learner. Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game. 💡 Why Join Sales at Planet Spark? Only Warm Leads: Skip the cold calls—our leads already know us and have completed a demo session. High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle. Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths. Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs. Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins. Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today. 
🎯 What You Bring to the Table: Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats. Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence. Empathy First: You genuinely care about clients’ goals and tailor your approach to meet them. Goal-Oriented: You’re self-driven, proactive, and hungry for results. Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools. ✨ What’s in It for You? 💼 High-growth sales career with serious earning potential 🌱 Continuous upskilling in EdTech, sales, and communication 🧘 Supportive culture that values growth and well-being 🎯 Opportunity to work at the cutting edge of education innovation
Posted 6 days ago
0 years
0 Lacs
Muzaffarpur, Bihar, India
On-site
Company Description Happy Kids Learning - India is an educational channel designed to make learning fun, engaging, and educational for toddlers and young children. We feature videos and pictorial content that help children explore the world through songs, stories, ABCs, numbers, shapes, colors, and much more. Each video is crafted to spark curiosity, develop early skills, and inspire a love of learning. Our goal is to provide playful educational content that introduces early concepts in an entertaining way. Role Description This is a full-time, on-site video editor role located in Muzaffarpur. The Video Editor will be responsible for editing and producing videos, including color grading and motion graphics, to create engaging and educational content for children. Day-to-day tasks include collaborating with the creative team to develop video concepts, editing raw footage, and enhancing videos with color correction and motion graphics. Qualifications: Skills in video production and video editing; experience with video color grading and motion graphics; graphic design skills; strong attention to detail and creativity; proficiency in video editing software such as Adobe Premiere Pro, Final Cut Pro, or similar; ability to work collaboratively with the creative team; a Bachelor’s degree in Film Production, Multimedia, Graphic Design, or a related field is a plus.
Posted 6 days ago
0.0 years
0 Lacs
Gurugram, Haryana
On-site
Position Title: Data Engineer Position Type: Regular - Full-Time Position Location: Gurgaon Requisition ID: 37277 Position Summary Data engineers are mainly responsible for designing, building, managing, and operationalizing data pipelines to support key data and analytics use cases. They play a crucial role in constructing and maintaining a modern, scalable data platform that utilizes the full capabilities of a Lakehouse Platform. You will be a key contributor to our data-driven organization, playing a vital role in both building a modern data platform and maintaining our Enterprise Data Warehouse (EDW). You will leverage your expertise in the Lakehouse Platform to design, develop, and deploy scalable data pipelines using modern and evolving technologies. Simultaneously, you will take ownership of the EDW architecture, ensuring its performance, scalability, and alignment with evolving business needs. Your responsibilities will encompass the full data lifecycle, from ingestion and transformation to delivery of high-quality datasets that empower analytics and decision-making. Duties and responsibilities Build data pipelines using Azure Databricks: Build and maintain scalable data pipelines and workflows within the Lakehouse environment. Transform, cleanse, and aggregate data using Spark SQL or PySpark. Optimize Spark jobs for performance, cost efficiency, and reliability. Develop and manage Lakehouse tables for efficient data storage and versioning. Utilize notebooks for interactive data exploration, analysis, and development. Implement data quality checks and monitoring to ensure accuracy and reliability. Drive Automation: Implement automated data ingestion processes using functionality available in the data platform, optimizing for performance and minimizing manual intervention. Design and implement end-to-end data pipelines, incorporating transformations, data quality checks, and monitoring. 
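The data quality checks mentioned above boil down to applying per-column rules and quarantining the records that fail, with the reasons recorded for monitoring. A minimal sketch in plain Python, with hypothetical column names and rules; in a Databricks pipeline the same predicates would typically be expressed as PySpark filter/when expressions over a DataFrame:

```python
# Hypothetical quality rules for an orders feed; the columns and
# thresholds are invented for illustration.
RULES = {
    "order_id": lambda v: v is not None,
    "quantity": lambda v: isinstance(v, int) and v > 0,
    "status":   lambda v: v in {"NEW", "SHIPPED", "CANCELLED"},
}

def quality_split(records):
    """Partition records into (clean, rejected-with-reasons)."""
    clean, rejected = [], []
    for rec in records:
        failures = [col for col, ok in RULES.items() if not ok(rec.get(col))]
        if failures:
            rejected.append((rec, failures))   # quarantine with reasons
        else:
            clean.append(rec)
    return clean, rejected

records = [
    {"order_id": 1, "quantity": 2, "status": "NEW"},
    {"order_id": None, "quantity": 1, "status": "NEW"},    # null key
    {"order_id": 3, "quantity": -5, "status": "SHIPPED"},  # bad quantity
]
clean, rejected = quality_split(records)
print(len(clean), len(rejected))  # 1 2
```

Routing failed rows to a quarantine table instead of dropping them is what makes the monitoring side possible: the rejection reasons become the metrics that alerting is built on.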
Utilize CI/CD tools (Azure DevOps/GitHub Actions) to automate pipeline testing, deployment, and version control. Enterprise Data Warehouse (EDW) Management: Create and maintain data models, schemas, and documentation for the EDW. Collaborate with data analysts, data scientists, and business stakeholders to gather requirements, design data marts, and provide support for reporting and analytics initiatives. Troubleshoot and resolve any issues related to data loading, transformation, or access within the EDW. Educate and train: The data engineer should be curious and knowledgeable about new data initiatives and how to address them. This includes applying their data and/or domain understanding in addressing new data requirements. They will also be responsible for proposing appropriate (and innovative) data ingestion, preparation, integration, and operationalization techniques in addressing these data requirements. The data engineer will be required to train counterparts in these data pipelining and preparation techniques. Ensure compliance with data governance and security: The data engineer is responsible for ensuring that the data sets provided to users comply with established governance and security policies. Data engineers should work with data governance and data security teams while creating new and maintaining existing data pipelines to guarantee alignment and compliance. Qualifications Education: Bachelor's or Master's degree in Computer Science, Information Management, Software Engineering, or equivalent work experience. Work Experience: At least four years of experience in data management disciplines, including data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks. At least three years of experience working in cross-functional teams and collaborating with business stakeholders in support of a departmental and/or multi-departmental data management and analytics initiative.
Technical knowledge, Abilities, and skills: Ability to design, build, and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. The ability to work with both IT and the business in integrating analytics and data science output into business processes and workflows. Strong knowledge of database programming languages and hands-on experience with any RDBMS. McCain Foods is an equal opportunity employer. As a global family-owned company, we strive to be the employer of choice in the diverse communities around the world in which we live and work. We recognize that inclusion drives our creativity, resilience, and success and makes our business stronger. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, age, veteran status, disability, or any other protected characteristic under applicable law. McCain is an accessible employer. If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meeting rooms), please let us know and we will work with you to find appropriate solutions. Your privacy is important to us. By submitting personal data or information to us, you agree this will be handled in accordance with McCain’s Global Privacy Policy and Global Employee Privacy Policy, as applicable. You can understand how your personal information is being handled here. Job Family: Information Technology Division: Global Digital Technology Department: Global Data and Analytics Location(s): IN - India : Haryana : Gurgaon Company: McCain Foods (India) P Ltd
Posted 6 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 23-Jul-2025 About the role Become a quick learner and be proactive in understanding the wider business requirements and linking them to other concepts in the domain. Help implement better solutions independently and faster, with better ownership. Help automate manual operational tasks, focus on creating reusable assets, and propel innovation. Work very closely with team members and maintain healthy working relationships. Be innovative and able to come up with ideas and reusable components & frameworks. Should be ready to support 24x7 as per the rota. Should be based in Bangalore or in the process of moving, and ready to come to the Tesco office when asked. What is in it for you At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy. Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. You will be responsible for Should have worked on building large-scale distributed systems. Should have led a team in a tech lead/module lead role. Should have good mentorship experience. Should have good communication and very good documentation skills. Should show maturity, understand requirements, and convert them into high-quality technical requirements. Should code and design end-to-end data flows and deliver on time. Be resilient & flexible to work across multiple teams and internal teams. Should help implement best practices in Data Architecture and Enterprise Software Development. Should have extensive experience working in Agile data engineering teams. Work very closely with the Engineering Manager, TPM, Product Manager and stakeholders.
You will need
Basic concepts of Data Engineering, ingestion from diverse sources and file formats, Hadoop, Data Warehousing, designing & implementing large-scale distributed data platforms & data lakes
Building distributed platforms or services
SQL, Spark, query tuning & performance optimization
General advanced Scala or Java experience (e.g. functional programming, using case classes, complex data structures & algorithms)
Experience with SOLID & DRY principles and good software architecture & design
Languages: Python, Java, Scala
Good experience in Big Data unit, system, integration & regression testing
DevOps experience with Jenkins, Maven, GitHub, Artifactory/JFrog, CI/CD
Big Data Processing: Hadoop, Sqoop, Spark and Spark Streaming
Hadoop Distributions: Cloudera/Hortonworks experience
Data Streaming: experience with Kafka and Spark Streaming
Data Validation & Data Quality
Data Lake & Medallion Architecture
Shell scripting & automation using Ansible or related configuration management tools
Agile processes & tools like Jira & Confluence
Code management tools like Git
File formats like ORC, Avro, Parquet, JSON & CSV
Big Data Orchestration: NiFi, Airflow, Spark on Kubernetes, YARN, Oozie, Azkaban
About us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers.
Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Technology: Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products, essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations – from identifying and authenticating customers, managing products, pricing, promoting, enabling customers to discover products, facilitating payment, and ensuring delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the capabilities we have built.
Posted 6 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 24-Jul-2025 About the role As a Software Development Engineer III, you will be responsible for building specific capabilities for the Tesco Dev, an enterprise-grade platform that is expected to host internet-scale workloads in Tesco. In this role, you will design and develop reusable solutions by collaborating with the product manager and engineers across India and the UK. What is in it for you At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy. Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. You will be responsible for Data Engineering relevant skills: Good at exploring data using advanced data analytics techniques. Able to identify, explore and recommend possible problems to solve. Able to come up with the right design adhering to data engineering best practices. Good understanding of solving time-series forecasting problems. Experience in data science and deep learning is an added advantage. You will need Coding & Development Practices: Very strong understanding of distributed computing concepts. Very good knowledge of PySpark and DataFrame APIs. Good experience with Spark application performance tuning and optimisations. Should have worked on both batch and streaming data processing. Good understanding of Hive and its usage with Spark, as well as open table formats like Iceberg. Good knowledge of data integration tools like Sqoop. Experience with orchestration tools like Oozie, Airflow, etc. Basic knowledge of shell scripting. Proven ability to write clean code that’s maintainable and extensible (design patterns, OOP). Proven ability to write unit test cases. Comfortable with Git and GitHub. Experience with a cloud platform like MS Azure is an added advantage. Automate everything by default: build a CI/CD pipeline; automate security scanning and performance testing as part of the build. Design: Able to come up with multiple design solutions and justify the reason for the chosen solution.
Able to understand the impact on the interface across applications. Able to identify risks and come up with a mitigation plan. Able to come up with High- and Low-Level Designs (HLD/LLD) for a given requirement. Should be able to work with stakeholders and break requirements into user stories. Able to identify/highlight non-functional requirements and ensure the design caters for them. Able to do an optimal design, thereby having minimal changes to the system. Design should cater for future considerations. Understands Requirement Traceability and ensures that all design components are traced back to requirements. Able to design re-usable and scalable modules. Able to anticipate common exceptions and design defensive mechanisms. Able to optimize the design to achieve maximum performance while consuming minimal resources. Considers SLA (Service Level Agreement) and OLA (Organization Level Agreement) before designing a job schedule. Problem Solving: Solves problems on their own merit. Clarifies requirements and asserts them in unit tests when necessary. Takes an iterative, incremental approach to solving the problem. Is able to communicate and discuss the problem effectively with the team. Able to re-create the problematic scenario and suggest an ideal solution. Able to prioritize problems by understanding their impact. About us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers.
Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Technology: Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products, essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations – from identifying and authenticating customers, managing products, pricing, promoting, enabling customers to discover products, facilitating payment, and ensuring delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the capabilities we have built.
Posted 6 days ago
5.0 years
0 Lacs
Hyderabad, Telangana
On-site
Job Information Date Opened 07/23/2025 Industry Information Technology Job Type Full time Work Experience 5+ years City Hyderabad State/Province Telangana Country India Zip/Postal Code 500039
Job Description Core Responsibilities: Design and optimize batch/streaming data pipelines using Scala, Spark, and Kafka. Implement real-time tokenization/cleansing microservices in Java. Manage production workflows via Apache Airflow (batch scheduling). Conduct root-cause analysis of data incidents using Spark/Dynatrace logs. Monitor EMR clusters and optimize performance via YARN/Dynatrace metrics. Ensure data security through HashiCorp Vault (Transform Secrets Engine). Validate data integrity and configure alerting systems.
Requirements Technical Requirements: Programming: Scala (Spark batch/streaming), Java (real-time microservices). Big Data Systems: Apache Spark, EMR, HDFS, YARN resource management. Cloud & Storage: Amazon S3, EKS. Security: HashiCorp Vault, tokenization vs. encryption (FPE). Orchestration: Apache Airflow (batch scheduling). Operational Excellence: Spark log analysis, Dynatrace monitoring, incident handling, data validation.
Mandatory Competencies: Expertise in distributed data processing (Spark on EMR/Hadoop). Proficiency in shell scripting and YARN job management. Ability to implement format-preserving encryption (tokenization solutions). Experience with production troubleshooting (executor logs, metrics, RCA).
Benefits: Insurance - Family Term Insurance; PF; Paid Time Off - 20 days; Holidays - 10 days; Flexi timing; Competitive salary; Diverse & inclusive workspace.
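The tokenization-vs-encryption distinction called out in the requirements is worth making concrete: tokenization swaps a sensitive value for a surrogate of the same shape and keeps the real value in a protected mapping, rather than deriving the output cryptographically from the input. The toy sketch below illustrates only the idea; it is not how Vault's Transform engine is implemented, and the card number is fabricated.

```python
import secrets
import string

class TokenVault:
    """Toy tokenizer: replaces a value with a random token of the same
    shape (digits stay digits, letters stay letters, punctuation is kept)
    and stores the original in a private map. A real system persists and
    protects this mapping; here it lives in memory purely to show the idea."""
    def __init__(self):
        self._forward, self._reverse = {}, {}

    def tokenize(self, value):
        if value in self._forward:          # stable: same input, same token
            return self._forward[value]
        while True:
            token = "".join(
                secrets.choice(string.digits) if ch.isdigit()
                else secrets.choice(string.ascii_letters) if ch.isalpha()
                else ch
                for ch in value
            )
            if token not in self._reverse and token != value:
                break
        self._forward[value], self._reverse[token] = token, value
        return token

    def detokenize(self, token):
        return self._reverse[token]

vault = TokenVault()
card = "4111-1111-1111-1234"   # fabricated example value
tok = vault.tokenize(card)
assert len(tok) == len(card) and tok.count("-") == 3  # format preserved
assert vault.detokenize(tok) == card                  # reversible via the vault
```

Because the token preserves the input's format, downstream systems (schemas, validators, reports) keep working without seeing the sensitive value; true format-preserving encryption achieves the same shape guarantee cryptographically, without a mapping table.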
Posted 6 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information Date Opened 07/23/2025 Job Type Permanent RSD NO 10371 Industry IT Services Min Experience 15+ Max Experience 15+ City Chennai State/Province Tamil Nadu Country India Zip/Postal Code 600018 Job Description Job Summary: We are seeking a Data Architect to design and implement scalable, secure, and efficient data solutions that support Convey Health Solutions' business objectives. This role will focus on data modeling, cloud data platforms, ETL processes, and analytics solutions, ensuring compliance with healthcare regulations (HIPAA, CMS guidelines). The ideal candidate will collaborate with data engineers, BI analysts, and business stakeholders to drive data-driven decision-making. Key Responsibilities: Enterprise Data Architecture: Design and maintain the overall data architecture to support Convey Health Solutions’ data-driven initiatives. Cloud & Data Warehousing: Architect cloud-based data solutions (AWS, Azure, Snowflake, BigQuery) to optimize scalability, security, and performance. Data Modeling: Develop logical and physical data models for structured and unstructured data, supporting analytics, reporting, and operational processes. ETL & Data Integration: Define strategies for data ingestion, transformation, and integration, leveraging ETL tools like Informatica, Talend, dbt, or Apache Airflow. Data Governance & Compliance: Ensure data quality, security, and compliance with HIPAA, CMS, and SOC 2 standards. Performance Optimization: Optimize database performance, indexing strategies, and query performance for real-time analytics. Collaboration: Partner with data engineers, software developers, and business teams to align data architecture with business objectives. Technology Innovation: Stay up to date with emerging data technologies, AI/ML applications, and industry trends in healthcare data analytics.
Required Qualifications:
● Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
● Experience: 7+ years of experience in data architecture, data engineering, or related roles.
● Technical Skills: Strong expertise in SQL, NoSQL, and data modeling techniques; hands-on experience with cloud data platforms (AWS Redshift, Snowflake, Google BigQuery, Azure Synapse); experience with ETL frameworks (Informatica, Talend, dbt, Apache Airflow, etc.); knowledge of big data technologies (Spark, Hadoop, Databricks); strong understanding of data security and compliance (HIPAA, CMS, SOC 2, GDPR).
● Soft Skills: Strong analytical, problem-solving, and communication skills; ability to work in a collaborative, agile environment.

Preferred Qualifications:
● Experience in healthcare data management, claims processing, risk adjustment, or pharmacy benefit management (PBM).
● Familiarity with AI/ML applications in healthcare analytics.
● Certifications in cloud data platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).

At Indium, diversity, equity, and inclusion (DEI) are the cornerstones of our values. We champion DEI through a dedicated council, expert sessions, and tailored training programs, ensuring an inclusive workplace for all. Our initiatives, including the WE@IN women empowerment program and our DEI calendar, foster a culture of respect and belonging. Recognized with the Human Capital Award, we are committed to creating an environment where every individual thrives. Join us in building a workplace that values diversity and drives innovation.
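The ETL and data-quality duties above can be sketched in miniature as a validate-and-transform step that normalizes claim-like records and diverts bad rows to a reject pile. The field names and rules are invented for illustration, not taken from any Convey Health Solutions system.

```python
from datetime import date

# Illustrative required schema for a claim-like record.
REQUIRED = {"claim_id", "member_id", "amount", "service_date"}

def clean_claim(raw: dict) -> dict:
    # Validate presence of required fields, then normalize each one.
    missing = REQUIRED - raw.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {
        "claim_id": str(raw["claim_id"]).strip().upper(),
        "member_id": str(raw["member_id"]).strip(),
        "amount": round(float(raw["amount"]), 2),        # normalize currency
        "service_date": date.fromisoformat(raw["service_date"]),
    }

def run_etl(records):
    # Split the batch into loadable rows and rejects, a common ETL pattern
    # that keeps bad source data out of the warehouse without halting the run.
    loaded, rejected = [], []
    for rec in records:
        try:
            loaded.append(clean_claim(rec))
        except (ValueError, KeyError, TypeError):
            rejected.append(rec)
    return loaded, rejected
```

In a real pipeline the reject pile would be persisted and alerted on, so data-quality regressions surface instead of silently dropping rows.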
Posted 6 days ago
0.0 - 4.0 years
0 Lacs
Gurugram, Haryana
On-site
Location: Gurugram, Haryana, India. Category: Corporate. Job Id: GGN00002148. Safety / Security / Environmental Compliance. Job Type: Full-Time. Posted Date: 07/23/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
As an airline, safety is our most important principle. And our Corporate Safety team is responsible for making sure safety is top of mind in every action we take. From conducting flight safety investigations and educating pilots on potential safety threats to implementing medical programs and helping prevent employee injuries, our team is instrumental in running a safe and successful airline for our customers and employees.

Job overview and responsibilities
Corporate safety is integral to ensuring a safe workplace for our employees and a safe travel experience for our customers. This role is responsible for supporting the development and implementation of a cohesive safety data strategy and supporting the Director of Safety Management Systems (SMS) in growing United's Corporate Safety predictive analytics capabilities. This Senior Analyst will serve as a subject matter expert for corporate safety data analytics and predictive insight strategy and execution. This position will be responsible for supporting new efforts to deliver insightful data analysis and build new key metrics for use by the entire United Safety organization, with the goal of enabling data-driven decision making and understanding for corporate safety. The Senior Analyst will be responsible for becoming the subject matter expert in several corporate safety-specific data streams and leveraging this expertise to deliver insights which are actionable and allow for a predictive approach to safety risk mitigation.
● Develop and implement predictive/prescriptive data analytics workflows for safety data management and streamlining processes
● Collaborate with Digital Technology and United operational teams to analyze, predict, and reduce safety risks and provide measurable solutions
● Partner with the Digital Technology team to develop streamlined and comprehensive data analytics workstreams
● Support United's Safety Management System (SMS) with predictive data analytics by designing and developing statistical models
● Manage and maintain the project portfolio of the SMS data team
● Areas of focus will include, but are not limited to: predictive and prescriptive analytics; training and validating models; creation and maintenance of standardized corporate safety performance metrics; design and implementation of new data pipelines; delivery of prescriptive analysis insights to internal stakeholders
● Design and maintain new and existing corporate safety data pipelines and analytical workflows
● Create and manage new methods for data analysis which provide prescriptive and predictive insights on corporate safety data
● Partner with US- and India-based internal partners to establish new data analysis workflows and provide analytical support to corporate and divisional work groups
● Collaborate with corporate and divisional safety partners to ensure standardization and consistency between all safety analytics efforts enterprise-wide
● Provide support and ongoing subject matter expertise regarding a set of high-priority corporate safety datasets and ongoing analytics efforts on those datasets
● Provide tracking and status update reporting on ongoing assignments, projects, and efforts to US- and India-based leaders

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd - a wholly owned subsidiary of United Airlines Inc.
Qualifications

What's needed to succeed (Minimum Qualifications):
● Bachelor's degree in computer science, data science, information systems, engineering, or another quantitative field (i.e., mathematics, statistics, economics, etc.)
● 4+ years of experience in data analytics, predictive modeling, or statistics
● Expert-level SQL skills
● Experience with Microsoft SQL Server Management Studio and hands-on experience working with massive data sets
● Proficiency writing complex code using both traditional and modern technologies/languages (i.e., Python, HTML, JavaScript, Power Automate, Spark, Node, etc.) for queries, procedures, and analytic processing to create usable data insight
● Ability to study/understand business needs, then design a data/technology solution that connects business processes with quantifiable outcomes
● Strong project management and communication skills
● 3-4 years working with complex data (data analytics, information science, data visualization, or another relevant quantitative field)
● Must be legally authorized to work in India for any employer without sponsorship
● Must be fluent in English (written and spoken)
● Successful completion of interview required to meet job qualification
● Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):
● Master's degree
● ML/AI experience
● Experience with PySpark, Apache, or Hadoop to deal with massive data sets
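A minimal sketch of the kind of statistical signal such a role produces: flagging periods whose incident count sits well above a trailing-window baseline. The window and threshold are illustrative assumptions; real SMS analytics would use exposure-normalized rates and richer models.

```python
import statistics

def flag_anomalies(counts, window=6, threshold=2.0):
    """Flag each period whose count exceeds the trailing-window mean
    by more than `threshold` population standard deviations.

    Returns one boolean per period after the warm-up window.
    """
    flags = []
    for i in range(window, len(counts)):
        hist = counts[i - window:i]
        mean = statistics.fmean(hist)
        sd = statistics.pstdev(hist)
        # A flat history (sd == 0) never flags; any change would be "infinite".
        flags.append(sd > 0 and (counts[i] - mean) / sd > threshold)
    return flags
```

Fed monthly injury or incident counts per workgroup, this turns raw event tallies into a simple leading indicator that a reviewer can drill into.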
Posted 6 days ago
10.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Location: Bangalore - Karnataka, India - EOIZ Industrial Area. Job Family: Artificial Intelligence & Machine Learning. Worker Type Reference: Regular - Permanent. Pay Rate Type: Salary. Career Level: T4(A). Job ID: R-46721-2025

Description & Requirements

Introduction: A Career at HARMAN Technology Services (HTS)
We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. At HARMAN HTS, you solve challenges by creating innovative solutions: combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity's needs; work at the convergence of cross-channel UX, cloud, insightful data, IoT, and mobility; empower companies to create new digital business models, enter new markets, and improve customer experiences.

Role: Data Architect with Microsoft Azure + Fabric + Purview
Experience Required: 10+ years

Key Responsibilities of the role include:
● Develop and implement data engineering projects, including data lakehouse or big data platforms
● Knowledge of Azure Purview is a must
● Knowledge of Azure Data Fabric
● Ability to define reference data architecture
● Cloud-native data platform experience in the Microsoft data stack, including Azure Data Factory and Databricks on Azure
● Knowledge of the latest data trends, including data fabric and data mesh
● Robust knowledge of ETL, data transformation, and data standardization approaches
● Key contributor to growth of the COE, influencing client revenues through data and analytics solutions
● Lead the selection, deployment, and management of data tools, platforms, and infrastructure
● Ability to technically guide a team of data engineers
● Oversee the design, development, and deployment of data solutions
● Define, differentiate, and strategize new data services/offerings and create reference architecture assets
● Drive partnerships with vendors on collaboration, capability building, go-to-market strategies, etc.
● Guide and inspire the organization about the business potential and opportunities around data
● Network with domain experts
● Collaborate with client teams to understand their business challenges and needs
● Develop and propose data solutions tailored to client-specific requirements
● Influence client revenues through innovative solutions and thought leadership
● Lead client engagements from project initiation to deployment
● Build and maintain strong relationships with key clients and stakeholders
● Build reusable methodologies, pipelines, and models
● Create data pipelines for more efficient and repeatable data science projects
● Design and implement data architecture solutions that support business requirements and meet organizational needs
● Collaborate with stakeholders to identify data requirements and develop data models and data flow diagrams
● Work with cross-functional teams to ensure that data is integrated, transformed, and loaded effectively across different platforms and systems
● Develop and implement data governance policies and procedures to ensure that data is managed securely and efficiently
● Develop and maintain a deep understanding of data platforms, technologies, and tools, and evaluate new technologies and solutions to improve data management processes
● Ensure compliance with regulatory and industry standards for data management and security
● Develop and maintain data models, data warehouses, data lakes, and data marts to support data analysis and reporting
● Ensure data quality, accuracy, and consistency across all data sources
● Knowledge of ETL and data integration tools such as Informatica, Qlik, Talend, and Apache NiFi
● Experience with data modeling and design tools such as ERwin, PowerDesigner, or ER/Studio
● Knowledge of data governance, data quality, and data security best practices
● Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform
● Familiarity with programming languages such as Python, Java, or Scala
● Experience with data visualization tools such as Tableau, Power BI, or QlikView
● Understanding of analytics and machine learning concepts and tools
● Knowledge of project management methodologies and tools to manage and deliver complex data projects
● Skilled in relational database technologies such as MySQL, PostgreSQL, and Oracle, as well as NoSQL databases such as MongoDB and Cassandra
● Strong expertise in cloud-based data services such as Azure Data Lake, Synapse, Azure Data Factory, AWS Glue, AWS Redshift, and Azure SQL
● Knowledge of big data technologies such as Hadoop, Spark, Snowflake, Databricks, and Kafka to process and analyze large volumes of data
● Proficient in data integration techniques to combine data from various sources into a centralized location
● Strong data modeling, data warehousing, and data integration skills

People & Interpersonal Skills:
● Build and manage a high-performing team of data engineers and other specialists
● Foster a culture of innovation and collaboration within the data team and across the organization
● Demonstrate the ability to work in diverse, cross-functional teams in a dynamic business environment
● Candidates should be confident, energetic self-starters with strong communication skills
● Candidates should exhibit superior presentation skills and the ability to present compelling solutions which guide and inspire
● Provide technical guidance and mentorship to the data team
● Collaborate with other stakeholders across the company to align the vision and goals
● Communicate and present the data capabilities and achievements to clients and partners
● Stay updated on the latest trends and developments in the data domain

What is required for the role?
● 10+ years of experience in the information technology industry with a strong focus on data engineering and architecture, preferably as an Azure data engineering lead
● 8+ years of data engineering or data architecture experience in successfully launching, planning, and executing advanced data projects
● Data governance experience is mandatory
● MS Fabric certified
● Experience working on RFPs/proposals, presales activities, business development, and overseeing delivery of data projects is highly desired
● Educational qualification: a master's or bachelor's degree in computer science, data science, information systems, operations research, statistics, applied mathematics, economics, engineering, or physics
● Demonstrated ability to manage data projects and diverse teams
● Experience in creating data and analytics solutions
● Experience in building data solutions in one or more domains: industrial, healthcare, retail, communication
● Problem-solving, communication, and collaboration skills
● Good knowledge of data visualization and reporting tools
● Ability to normalize and standardize data as per key KPIs and metrics

Benefits:
● Opportunities for professional growth and development
● Collaborative and supportive work environment

What We Offer:
● Access to employee discounts on world-class HARMAN/Samsung products (JBL, Harman Kardon, AKG, etc.)
● Professional development opportunities through HARMAN University's business and leadership academies
● An inclusive and diverse work environment that fosters and encourages professional and personal development

You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique.
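The data governance and quality responsibilities listed for this role can be sketched as a small batch-level quality gate that a pipeline runs before loading; the rule names and pass criteria below are illustrative assumptions, not HARMAN practice.

```python
# A minimal data-quality gate: profile a batch of rows against two common
# governance rules (no null keys, no duplicate keys) and report pass/fail.
def profile(rows, key="id"):
    report = {"rows": len(rows), "null_keys": 0, "duplicate_keys": 0}
    seen = set()
    for row in rows:
        k = row.get(key)
        if k is None:
            report["null_keys"] += 1      # governance rule: key must be present
        elif k in seen:
            report["duplicate_keys"] += 1  # governance rule: key must be unique
        else:
            seen.add(k)
    report["passed"] = report["null_keys"] == 0 and report["duplicate_keys"] == 0
    return report
```

A gate like this is typically wired into the orchestration layer so a failing batch is quarantined rather than loaded.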
We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.

About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we've been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today's most sought-after performers, while our digital transformation solutions serve humanity by addressing the world's ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners, and each other. If you're ready to innovate and do work that makes a lasting impact, join our talent community today!

Important Notice: Recruitment Scams
Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com.

HARMAN is proud to be an Equal Opportunity employer. HARMAN strives to hire the best qualified candidates and is committed to building a workforce representative of the diverse marketplaces and communities of our global colleagues and customers.
All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. HARMAN attracts, hires, and develops employees based on merit, qualifications and job-related performance. (www.harman.com)
Posted 6 days ago
0.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bengaluru, Karnataka, India

Qualification:
● Strong experience working with the Apache Spark framework, including a solid grasp of core concepts, performance optimizations, and industry best practices
● Proficient in PySpark with hands-on coding experience; familiarity with unit testing, object-oriented programming (OOP) principles, and software design patterns
● Experience with code deployment and associated processes
● Proven ability to write complex SQL queries to extract business-critical insights
● Hands-on experience in streaming data processing
● Familiarity with machine learning concepts is an added advantage
● Experience with NoSQL databases
● Good understanding of Test-Driven Development (TDD) methodologies
● Demonstrated flexibility and eagerness to learn new technologies

Skills Required: Big Data, PySpark, Python

Role:
● Design and implement solutions for problems arising out of large-scale data processing
● Attend/drive various architectural, design, and status calls with multiple stakeholders
● Ensure end-to-end ownership of all assigned tasks, including development, testing, deployment, and support
● Design, build, and maintain efficient, reusable, and reliable code
● Test implementations, troubleshoot, and correct problems
● Capable of working as an individual contributor and within a team
● Ensure high-quality software development with complete documentation and traceability
● Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups)

Experience: 5 to 8 years
Job Reference Number: 13207
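The TDD expectation above is often met by keeping transformations as pure functions that can be unit-tested without a cluster and only then wrapped in a PySpark UDF or `mapPartitions` call. The function name, fields, and input formats below are invented examples.

```python
# A TDD-flavored sketch: write the test alongside a pure transformation,
# so the logic is verifiable locally before it runs inside Spark.
def normalize_amount(raw: str) -> float:
    """Parse amounts like ' 1,234.50 ' or '₹99' into a float."""
    cleaned = raw.strip().lstrip("₹$").replace(",", "")
    return float(cleaned)

def test_normalize_amount():
    # Plain asserts; under pytest these run as-is.
    assert normalize_amount("1,234.50") == 1234.50
    assert normalize_amount(" ₹99 ") == 99.0
    assert normalize_amount("$0.10") == 0.1
```

Because `normalize_amount` has no Spark dependency, the same function can be registered as a UDF in production while the test suite stays fast and cluster-free.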
Posted 6 days ago
8.0 years
0 Lacs
Delhi, Delhi
On-site
Delhi, Delhi, India. Department: Backend Development. Job posted on: Jul 23, 2025. Employee Type: Permanent. Experience range (Years): 0 - 0

About the Role
We are looking for a seasoned Engineering Manager to lead the development of our internal Risk, Fraud and Operations Platform. This platform plays a critical role in ensuring smooth business operations, detecting anomalies, managing fraud workflows, and supporting internal teams with real-time visibility and control. As an Engineering Manager, you'll be responsible for leading a cross-functional team of backend engineers working on high-throughput systems, real-time data pipelines, and internal tools that power operational intelligence and risk management. You will own delivery, architecture decisions, team growth, and collaboration with stakeholders.

Key Responsibilities:
● Lead and grow a team of software engineers building internal risk and ops platforms.
● Oversee the design and development of scalable microservices and real-time data pipelines.
● Collaborate with stakeholders from Risk, Ops, and Product to define technical roadmaps and translate them into delivery plans.
● Ensure high system reliability, data accuracy, and low-latency access to risk signals and ops dashboards.
● Drive architectural decisions, code quality, testing, and deployment best practices.
● Contribute to hands-on design, reviews, and occasional coding when required.
● Optimize performance and cost-efficiency of services deployed on AWS.
● Mentor team members and foster a culture of ownership, innovation, and continuous learning.

Tech Stack You'll Work With:
● Languages: Node.js, Python, Java
● Data & Messaging: Kafka, OpenSearch, MongoDB, MySQL, Apache Spark, Apache Flink, Apache Druid
● Architecture: Microservices, REST APIs
● Infrastructure: AWS (EC2, ECS/EKS, Lambda, RDS, CI/CD, etc.)

Requirements:
● 8+ years of software engineering experience with backend and distributed systems.
● 2+ years of people management or tech leadership experience.
● Strong experience with Node.js and Python; familiarity with Java is a plus.
● Hands-on experience with event-driven architecture using Kafka or similar.
● Exposure to OpenSearch, MongoDB, and relational databases like MySQL.
● Exposure to Spark, Flink, and ETL data pipelines.
● Deep understanding of cloud-native architecture and services on AWS.
● Proven ability to manage timelines, deliver features, and drive cross-functional execution.
● Strong communication and stakeholder management skills.

Preferred Qualifications:
● Prior experience in risk, fraud detection, operations tooling, or internal platforms.
● Experience with observability, alerting, and anomaly detection systems.
● Comfortable working in fast-paced environments with rapidly evolving requirements.
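As a sketch of the anomaly-detection side of such a platform, here is a toy sliding-window velocity rule of the sort a Kafka consumer loop might apply per account. The limit and window values are illustrative assumptions, not a real fraud policy.

```python
from collections import defaultdict, deque

class VelocityRule:
    """Flag an account that produces more than `limit` events within a
    sliding `window` of seconds. Timestamps are assumed non-decreasing,
    as they would be inside a single consumer partition."""

    def __init__(self, limit=3, window=60):
        self.limit, self.window = limit, window
        self.events = defaultdict(deque)   # account_id -> recent timestamps

    def observe(self, account_id: str, ts: float) -> bool:
        q = self.events[account_id]
        q.append(ts)
        while q and ts - q[0] > self.window:
            q.popleft()                    # expire events outside the window
        return len(q) > self.limit         # True => raise a fraud alert
```

The per-account deques keep memory proportional to recent activity only, which is what makes rules like this viable on high-throughput event streams.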
Posted 6 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Data Engineer
Ahmedabad, India; Hyderabad, India. Information Technology. 315442

Job Description
About The Role: Grade Level (for internal use): 10

The Team: The Data Engineering team is responsible for architecting, building, and maintaining our evolving data infrastructure, as well as curating and governing the data assets created on our platform. We work closely with various stakeholders to acquire, process, and refine vast datasets, focusing on creating scalable and optimized data pipelines. Our team possesses broad expertise in critical data domains, technology stacks, and architectural patterns. We foster knowledge sharing and collaboration, resulting in a unified strategy and seamless data management.

The Impact: This role is the foundation of the products delivered. The data onboarded is the base for the company, as it feeds into the products and platforms and is essential for supporting our advanced analytics and machine learning initiatives.

What's in it for you: Be part of a successful team which works on delivering top-priority projects which will directly contribute to the company's strategy. Drive the testing initiatives, including supporting automation strategy, performance, and security testing. This is the place to enhance your testing skills while adding value to the business. As an experienced member of the team, you will have the opportunity to own and drive a project end to end and collaborate with developers, business analysts, and product managers who are experts in their domain, which can help you build multiple skillsets.

Responsibilities:
● Design, develop, and maintain scalable and efficient data pipelines to process large volumes of data.
● Implement ETL processes to acquire, validate, and process incoming data from diverse sources.
● Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and translate them into technical solutions.
● Implement data ingestion, transformation, and integration processes to ensure data quality, accuracy, and consistency.
● Optimize Spark jobs and data processing workflows for performance, scalability, and reliability.
● Troubleshoot and resolve issues related to data pipelines, data processing, and performance bottlenecks.
● Conduct code reviews and provide constructive feedback to junior team members to ensure code quality and best-practices adherence.
● Stay updated with the latest advancements in Spark and related technologies and evaluate their potential for enhancing existing data engineering processes.
● Develop and maintain documentation, including technical specifications, data models, and system architecture diagrams.
● Stay abreast of emerging trends and technologies in the data engineering and big data space and propose innovative solutions to enhance data processing capabilities.

What We're Looking For:
● 5+ years of experience in data engineering or a related field
● Strong experience in Python programming, with expertise in building data-intensive applications
● Proven hands-on experience with Apache Spark, including Spark Core, Spark SQL, Spark Streaming, and Spark MLlib
● Solid understanding of distributed computing concepts, parallel processing, and cluster computing frameworks
● Proficiency in data modeling, data warehousing, and ETL techniques
● Experience with workflow management platforms, preferably Airflow
● Familiarity with big data technologies such as Hadoop, Hive, or HBase
● Strong knowledge of SQL and experience with relational databases
● Hands-on experience with the AWS cloud data platform
● Strong problem-solving and troubleshooting skills, with the ability to analyze complex data engineering issues and provide effective solutions
● Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams
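The pipeline responsibilities above can be sketched with composable generator stages (extract, validate, transform, load); in a Spark deployment each stage would map to DataFrame transformations, but the shape is the same. Stage names and the sample fields are invented for illustration.

```python
# A minimal generator-based pipeline: each stage consumes and yields records,
# so stages compose and stream without materializing intermediate lists.
def extract(rows):
    yield from rows                        # stand-in for reading a source

def validate(records):
    for r in records:
        # Drop records missing a symbol or price rather than failing the run.
        if r.get("price") is not None and r.get("symbol"):
            yield r

def transform(records):
    for r in records:
        yield {"symbol": r["symbol"].upper(), "price": round(r["price"], 2)}

def load(records):
    return list(records)                   # stand-in for a warehouse write

def run(rows):
    return load(transform(validate(extract(rows))))
```

Because each stage is a plain generator, individual stages are easy to unit-test and to swap out, which mirrors how well-factored Spark jobs isolate each transformation.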
Nice to have: experience with Databricks.

Preferred Qualifications: Bachelor's degree in Information Technology, Computer Information Systems, Computer Engineering, Computer Science, or another technical discipline.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology—the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead.
We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
● Health & Wellness: Health care coverage designed for the mind and body.
● Flexible Downtime: Generous time off helps keep you energized for your time on.
● Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
● Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
● Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
● Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com.
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 315442
Posted On: 2025-07-23
Location: Ahmedabad, Gujarat, India
Posted 6 days ago