
16161 Spark Jobs - Page 37

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Position Title: Data Engineer
Position Type: Regular - Full-Time
Position Location: Gurgaon
Requisition ID: 37277

Position Summary
Data engineers are mainly responsible for designing, building, managing, and operationalizing data pipelines to support key data and analytics use cases. They play a crucial role in constructing and maintaining a modern, scalable data platform that utilizes the full capabilities of a Lakehouse Platform. You will be a key contributor to our data-driven organization, playing a vital role in both building a modern data platform and maintaining our Enterprise Data Warehouse (EDW). You will leverage your expertise in the Lakehouse Platform to design, develop, and deploy scalable data pipelines using modern and evolving technologies. Simultaneously, you will take ownership of the EDW architecture, ensuring its performance, scalability, and alignment with evolving business needs. Your responsibilities will encompass the full data lifecycle, from ingestion and transformation to delivery of high-quality datasets that empower analytics and decision-making.

Duties and Responsibilities
Build data pipelines using Azure Databricks:
- Build and maintain scalable data pipelines and workflows within the Lakehouse environment.
- Transform, cleanse, and aggregate data using Spark SQL or PySpark.
- Optimize Spark jobs for performance, cost efficiency, and reliability.
- Develop and manage Lakehouse tables for efficient data storage and versioning.
- Utilize notebooks for interactive data exploration, analysis, and development.
- Implement data quality checks and monitoring to ensure accuracy and reliability.
Drive automation:
- Implement automated data ingestion processes using functionality available in the data platform, optimizing for performance and minimizing manual intervention.
- Design and implement end-to-end data pipelines, incorporating transformations, data quality checks, and monitoring.
- Utilize CI/CD tools (Azure DevOps/GitHub Actions) to automate pipeline testing, deployment, and version control.
Enterprise Data Warehouse (EDW) management:
- Create and maintain data models, schemas, and documentation for the EDW.
- Collaborate with data analysts, data scientists, and business stakeholders to gather requirements, design data marts, and provide support for reporting and analytics initiatives.
- Troubleshoot and resolve any issues related to data loading, transformation, or access within the EDW.
Educate and train: The data engineer should be curious and knowledgeable about new data initiatives and how to address them. This includes applying their data and/or domain understanding in addressing new data requirements. They will also be responsible for proposing appropriate (and innovative) data ingestion, preparation, integration, and operationalization techniques in addressing these data requirements. The data engineer will be required to train counterparts in these data pipelining and preparation techniques.
Ensure compliance with data governance and security: The data engineer is responsible for ensuring that the data sets provided to users comply with established governance and security policies. Data engineers should work with data governance and data security teams while creating new and maintaining existing data pipelines to guarantee alignment and compliance.

Qualifications
Education: Bachelor's or Master's in Computer Science, Information Management, Software Engineering, or equivalent work experience.
Work Experience:
- At least four years of experience in data management disciplines including data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks.
- At least three years of experience working in cross-functional teams and collaborating with business stakeholders in support of a departmental and/or multi-departmental data management and analytics initiative.
Technical Knowledge, Abilities, and Skills:
- Ability to design, build, and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management.
- The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows.
- Strong knowledge of database programming languages and hands-on experience with any RDBMS.

McCain Foods is an equal opportunity employer. As a global family-owned company, we strive to be the employer of choice in the diverse communities around the world in which we live and work. We recognize that inclusion drives our creativity, resilience, and success and makes our business stronger. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, age, veteran status, disability, or any other protected characteristic under applicable law. McCain is an accessible employer. If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meeting rooms), please let us know and we will work with you to find appropriate solutions. Your privacy is important to us. By submitting personal data or information to us, you agree this will be handled in accordance with McCain's Global Privacy Policy and Global Employee Privacy Policy, as applicable.

Job Family: Information Technology
Division: Global Digital Technology
Department: Global Data and Analytics
Location(s): IN - India : Haryana : Gurgaon
Company: McCain Foods (India) P Ltd
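For orientation, a minimal sketch of the kind of Databricks/PySpark pipeline step this posting describes follows: ingest, cleanse, aggregate, apply a simple quality gate, and publish a Lakehouse (Delta) table. This is illustrative only, not McCain's actual pipeline; the paths, table, and column names are hypothetical, and it assumes a Spark session with Delta Lake available.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

# Ingest raw records (hypothetical Delta path; assumes Delta Lake is available).
raw = spark.read.format("delta").load("/mnt/raw/orders")

# Cleanse and transform with PySpark.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate to a curated, analytics-ready table.
daily = (
    cleaned.groupBy("order_date", "region")
           .agg(F.sum("order_amount").alias("total_amount"),
                F.countDistinct("order_id").alias("order_count"))
)

# A simple data quality gate before publishing: fail fast on null keys.
null_keys = daily.filter(F.col("order_date").isNull()).count()
if null_keys:
    raise ValueError(f"{null_keys} rows have a null order_date")

# Publish as a Lakehouse (Delta) table.
daily.write.format("delta").mode("overwrite").save("/mnt/curated/orders_daily")
```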

Posted 5 days ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Role: Administrative Assistant
Shift: US (CST timings, 5 PM to 2 AM)
Location: Onsite / Gurgaon / Full time

Who We Are: This is Spearhead Technology — where every challenge is an opportunity, and every solution is a masterpiece in the making. As a full-lifecycle IT company, we transcend mere delivery; we engineer success. From inception to implementation, our seasoned expertise shepherds every phase of the journey. Be it planning, analysis, design, development, testing, or the seamless transition to production, we stand as steadfast partners in our clients' progress. At Spearhead Technology, quality isn't a mere aspiration—it's our ethos. Rooted in Tech Advisory, our methodology is guided by insights that spark transformative outcomes. We recognize the paramount importance of talent retention. Through a steadfast commitment to work-life balance, competitive remuneration packages, and an optimized operational model, we ensure our team remains as exceptional as our services. Step into Spearhead Technology, where innovation meets precision, and together, let's sculpt the future of technology with finesse and distinction.

Requirements: Spearhead Technology is looking for a highly organized, confident, and proactive Administrative Assistant to join our team at the Gurugram office. This is not a traditional admin role; we are seeking someone with exceptional communication skills and strong presence, who can seamlessly bridge communication between senior leadership and all levels of the organization. The ideal candidate will take charge of administrative coordination, internal reporting, and relationship management across the hierarchy. You will serve as a key support to the leadership team and act as the primary point of contact for ensuring internal operational coordination and alignment.

Key Responsibilities:
- Act as a liaison between the President, CEO, and cross-functional teams across the organization.
- Take regular reporting and updates from all levels of the company hierarchy, from entry-level teams to department heads.
- Establish strong working relationships across departments to ensure effective follow-ups, feedback loops, and task closures.
- Manage executive calendars, schedule meetings, and coordinate appointments with precision and foresight.
- Draft and manage internal and external communications, reports, minutes of meetings, and presentations.
- Track progress on key leadership action items, deadlines, and deliverables.
- Coordinate and support team events, business reviews, and internal meets.
- Assist in basic office administration, travel bookings, and vendor coordination as required.

Requirements:
- Graduate in any discipline (Bachelor's degree required).
- 3–5 years of experience as an Administrative Assistant, Executive Assistant, or in a similar administrative or coordination role.
- Excellent communication skills: fluent, clear, and confident in both verbal and written English.
- Strong interpersonal skills with the ability to build rapport and collaborate across various departments and levels of seniority.
- Self-driven, assertive, and able to manage priorities in a fast-paced work environment.
- Highly organized with strong attention to detail and follow-through.
- Proficient in MS Office Suite (Word, Excel, PowerPoint, Outlook) and digital collaboration platforms (Zoom, MS Teams, etc.).
- High level of discretion, integrity, and professionalism in handling confidential information.

Benefits
What's in it for you: At Spearhead Technology, we prioritize your well-being and professional growth. Here's what you can expect:
- Achieve a healthy work-life balance.
- Competitive compensation and abundant growth opportunities.
- Enjoy a standard 5-day workweek with 2 fixed weekly offs.
- Experience an employee-centric environment with supportive policies.
- Benefit from family-friendly and flexible work arrangements.
- Access our Performance Advancement and Career Enhancement (PACE) initiative and discover opportunities for both personal and professional growth. From tailored career development plans to expert counseling services, PACE empowers you to chart your course to success with confidence and clarity.
- Elevate your career trajectory with our Learning & Development (L&D) program. Join our team and embark on a transformative journey of upskilling and self-discovery. With continuous learning as your compass, you'll not only enhance your expertise but also open doors to new opportunities, paving the way for career growth and fulfillment.

Please note: At Spearhead Technology, we value the importance of collaboration, learning, and fostering connections with clients, peers, leaders, and communities. While some in-person engagement may be required for certain roles, we are committed to providing flexibility to accommodate your individual work-life balance needs. As an equal opportunities employer, Spearhead Technology welcomes and encourages applications from all members of society. We are dedicated to creating an inclusive environment where diversity is celebrated, and individuals are valued for their unique perspectives and contributions. We do not discriminate on the basis of race, religion or belief, ethnicity, disability, age, citizenship, marital or civil partnership status, sexual orientation, or gender identity.

Posted 5 days ago

Apply

2.0 - 4.0 years

3 - 7 Lacs

Nellore

Work from Office

Full Time Role at EssentiallySports for Data Growth Engineer

EssentiallySports is the home for the underserved fan, delivering storytelling that goes beyond the headlines. As a media platform, we combine deep audience insights with cultural trends to meet fandom where it lives and where it goes next.

Values
- Focus on the user and all else will follow
- Hire for intent and not for experience
- Bootstrapping gives you the freedom to serve the customer and the team instead of investors
- Internet and technology untap the niches
- Action oriented, integrity, freedom, strong communicators, and responsibility
- All things equal, one with high agency wins

EssentiallySports is a top 10 sports media platform in the U.S., generating over a billion pageviews a year and 30M+ monthly active users. This massive traffic fuels our data-driven culture, allowing us to build owned audiences at scale through organic growth—a model we take pride in, with zero CAC. The next phase of ES growth is our newsletter initiative: in less than 9 months, we have built a robust newsletter brand with 700,000+ highly engaged readers and impressive performance metrics:
- 5 newsletter brands
- 700k+ subscribers
- Open rates of 40%-46%

The role is for a data engineer with growth and business acumen, in the "permissionless growth" team. Someone who can connect the pipelines of millions of users, but at the same time knit a story of the how and why.

Responsibilities
- Owning the data pipeline from web to Athena to email, end-to-end
- You'll make the key decisions and see them through to successful user sign-up
- Use data science to find real insights, which translate to user engagement
- Pushing changes every weekday
- Personalization at scale: leverage fan behavior data to tailor content and improve lifetime value

Who are you?
- 2+ years of professional data engineering experience
- Someone who spends time thinking about business insights as much as they do on engineering
- Is a self-starter, and drives initiatives
- Is excited to pick up AI, and integrate it at various touch points
- You have strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better)
- Have an awareness of Athena, Glue, Jupyter, or intent to pick them up
- You're comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools
- Collaborative and want to see the team succeed in its goals
- Problem-solving, proactive, and solution-oriented mindset, to spot opportunities and translate them into real growth
- Ability to thrive in startups with a fast-paced environment and take ownership for working through ambiguity
- Excited to join a lean team in a big company that moves quickly
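Since the role centers on a web-to-Athena-to-email pipeline, here is a hedged sketch of what one reporting step in that flow could look like: pull a weekly engagement summary from Athena into pandas. It is not EssentiallySports' actual stack; the database, table, bucket, and column names are hypothetical, and it assumes the pyathena package plus valid AWS credentials.

```python
import pandas as pd
from pyathena import connect

# Hypothetical staging bucket and region; requires valid AWS credentials.
conn = connect(s3_staging_dir="s3://example-athena-results/", region_name="us-east-1")

# Weekly engagement per newsletter brand (hypothetical schema).
query = """
    SELECT newsletter_brand,
           COUNT(DISTINCT subscriber_id)             AS subscribers,
           AVG(CASE WHEN opened THEN 1.0 ELSE 0 END) AS open_rate
    FROM analytics.newsletter_events
    WHERE event_date >= date_add('day', -7, current_date)
    GROUP BY newsletter_brand
"""
weekly = pd.read_sql(query, conn)

# A downstream step could push a high-engagement segment to the email platform.
print(weekly[weekly["open_rate"] >= 0.40])
```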

Posted 5 days ago

Apply

4.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
We are committed to creating a workplace for the industry's best talent. The Smart Cube (A WNS Company) is proud to be certified as a 'Great Place to Work' for the fifth year running. The Smart Cube is also recognized by Great Place to Work as one of India's Best Workplaces for Women 2021. The Smart Cube, a global provider of strategic research and analytics solutions, has been rated on Analytics India Magazine's (AIM) Penetration and Maturity Quadrant of Top Data Science Providers as a "Seasoned Vendor" in the 2022 report among the leading analytics service providers based out of India. We are listed among the top 50 data science organizations, and The Smart Cube was shortlisted for two awards at the British Data Awards. Our clients include a third of the companies in the FTSE and Fortune 100, primarily in the CPG, Life Sciences, Energy, Chemicals, Industrials, Financial Services, Professional Services, and Retail sectors.

Roles and responsibilities
Specifically, Assistant Managers should:
- Understand the client objectives, and work with the Project Lead (PL) to design the analytical solution/framework. Be able to translate the client objectives/analytical plan into clear deliverables with associated priorities and constraints.
- Organize/prepare/manage data and conduct quality checks to ensure that the analysis dataset is ready.
- Explore and implement various statistical and analytical techniques (including machine learning) like linear/non-linear regression, decision trees, segmentation, and time series forecasting, as well as machine learning algorithms like Random Forest, SVM, ANN, etc.
- Conduct sanity checks of the analysis output based on reasoning and common sense, and be able to do a rigorous self-QC, as well as QC of the work assigned to junior analysts, to ensure an error-free output.
- Interpret the output in the context of the client's business and industry to identify trends and actionable insights.
- Be able to take client calls relatively independently, and interact with onsite leads (if applicable) on a daily basis.
- Discuss queries/certain sections of the deliverable report over client calls or video conferences.
- Oversee the entire project lifecycle, from initiation to closure, ensuring timely and within-budget delivery.
- Collaborate with stakeholders to gather and refine business requirements, translating them into technical specifications.
- Manage a team of data analysts and developers, providing guidance, mentorship, and performance evaluations.
- Ensure data integrity and accuracy through rigorous data validation and quality checks.
- Facilitate effective communication between technical teams and business stakeholders to align project goals and expectations.
- Drive continuous improvement initiatives to enhance data analytics processes and methodologies.
- Act as a project lead, coordinating cross-functional teams and managing project timelines and deliverables.

Client Management
- Act as client lead and maintain the client relationship; make independent key decisions related to client management.
- Be a part of deliverable discussions with clients over telephonic calls, and guide the project team on the next steps and way forward.

Technical Requirements
- Knowledge of how to connect databases with Knime (e.g. Snowflake, SQL DB), along with SQL concepts like types of joins/unions of data.
- Read data from a DB and write it back to a database.
- Working knowledge of macros to avoid repetition of tasks, and of enabling schedulers to run workflow(s).
- Design and develop ETL workflows and datasets in Knime to be used by the BI reporting tool.
- Perform end-to-end data validation and prepare technical specifications and documentation for Knime workflows supporting BI reports.
- Develop and maintain interactive dashboards and reports using PowerBI to support data-driven decision-making.
- Lead and manage data analytics projects utilizing PowerBI, Python, and SQL to guide and deliver actionable business insights.
- Be able to succinctly visualize the findings through a PPT or a BI dashboard (Tableau, QlikView, etc.) and highlight the key takeaways from a business perspective.

Ideal Candidate
- 4-7 years of relevant advanced analytics experience in Marketing, CRM, or Pricing in either the Retail or CPG industries. Other B2C domains can be considered.
- Experience in managing, cleaning, and analyzing large datasets using tools like Python, R, or SAS.
- Experience in using multiple advanced analytics techniques or machine learning algorithms.
- Experience in handling client calls and working independently with clients.
- Understanding of consumer businesses such as Retail, CPG, or Telecom.
- Knowledge of working across multiple data types and files like flat files and RDBMS files; Knime workflows, Knime Server, and multiple data platforms (SQL Server, Teradata, Hadoop, Spark), on premise or on the cloud.
- Basic knowledge of advanced statistical techniques like decision trees, different types of regression, clustering, forecasting (ARIMA/X), ML, etc.

Other Skills
- Excellent communication skills (both written and oral).
- Ability to create client-ready deliverables in Excel and PowerPoint.
- Optimization techniques (linear, non-linear), and knowledge of supply chain.
- VBA, Excel Macro programming, Tableau, QlikView.

Education
- Engineers from top-tier institutes (IITs, DCE/NSIT, NITs) or Postgraduates in Maths/Statistics/OR from top-tier colleges/universities.
- MBA from top-tier B-schools.

If interested, please share your updated CV at kiran.meghani@wns.com or apply on https://smrtr.io/sz4-S. Looking for immediate or early joiners.
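The "read data from a DB and write it back" requirement above is a standard ETL round trip. A minimal Python sketch of that step (outside of Knime) is shown below, with hypothetical connection details and table names; a Snowflake source would need the snowflake-sqlalchemy dialect instead of the Postgres one assumed here.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string; swap in the appropriate SQLAlchemy dialect.
engine = create_engine("postgresql+psycopg2://etl_user:***@db-host:5432/analytics")

# Read, transform, write back: a typical Knime-style round trip done in Python.
sales = pd.read_sql("SELECT store_id, sale_date, amount FROM raw_sales", engine)
monthly = (
    sales.assign(month=pd.to_datetime(sales["sale_date"]).dt.to_period("M").astype(str))
         .groupby(["store_id", "month"], as_index=False)["amount"]
         .sum()
)
monthly.to_sql("monthly_sales", engine, if_exists="replace", index=False)
```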

Posted 5 days ago

Apply

7.0 - 12.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Project description
The Surveillance Enhancement Program (SEP) is a multi-year program (MYP) to build and enhance the surveillance platform to detect potential instances of market misconduct, market manipulation, and market abuse. The Bank met its commitment to the FCA (Financial Conduct Authority) to meet Market Abuse Regulation ("MAR") requirements for critical data sources in September 2017. Following that, subsequent projects were initiated to further enhance, expand, and complement the current coverage of the automated surveillance platform. This project will focus on alerts enhancements, strategic data sourcing from FM order and trade lakes, and UI/workflow enhancements to meet regulatory obligations (i.e. Market Abuse Regulation, Dodd-Frank). This also includes Control Room & Suitability Monitoring related data, UI, and scenario enhancements.

Responsibilities
- Providing guidance and support for company projects and initiatives
- Working with stakeholders to implement technology solutions to their requirements on surveillance platforms
- Data analysis and production of technical documentation; formulating testing approaches, test plans, and cases
- Regular updates to senior stakeholders
- Supporting business users with user acceptance testing
- Presentation of results to senior stakeholders
- Analyze trade data to identify trends, anomalies, and potential risk areas
- Generate detailed reports for senior management, compliance, and regulatory bodies to support ongoing monitoring efforts

Skills
Must have
- 7+ years of Business Analysis & UAT experience in the Capital Markets domain
- Solid understanding of trade surveillance practices, market abuse regulations, and related compliance requirements (e.g., MAR, Dodd-Frank, MiFID II)
- Knowledge of trade surveillance tools and systems (e.g., Actimize, SMARTS, or similar)
- Good knowledge of Python/PySpark and SQL
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills, with the ability to collaborate with diverse teams and stakeholders
- Ability to work independently and manage multiple priorities in a dynamic environment
- Good understanding of business change and software development life cycles
- Strong stakeholder management skills

Nice to have
- Machine Learning, Artificial Intelligence
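To make the "analyze trade data to identify anomalies" responsibility concrete, here is a hedged PySpark sketch that flags trades whose notional is far above a trader's recent average. The threshold, source path, and column names are hypothetical and are not the bank's actual surveillance logic; real scenarios run in dedicated tools such as Actimize or SMARTS.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("trade_outliers").getOrCreate()

# Hypothetical trade lake source and column names.
trades = spark.read.parquet("/data/fm_trades")

# Compare each trade's notional with the trader's previous 50 trades.
w = Window.partitionBy("trader_id").orderBy("trade_ts").rowsBetween(-50, -1)

flagged = (
    trades.withColumn("avg_notional", F.avg("notional").over(w))
          .withColumn("suspicious", F.col("notional") > 5 * F.col("avg_notional"))
          .filter("suspicious")
)

# Alerts would feed the surveillance case-management workflow downstream.
flagged.write.mode("overwrite").parquet("/data/alerts/notional_outliers")
```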

Posted 5 days ago

Apply

8.0 - 13.0 years

18 - 22 Lacs

Mumbai, Chennai, Bengaluru

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, and all underpinned by technology created with purpose.

Your role
In this role you will play a key part in Data Strategy. We are looking for candidates with 8+ years of experience in Data Strategy (Tech Architects, Senior BAs) who will support our product, sales, and leadership teams by creating data-strategy roadmaps. The ideal candidate is adept at understanding as-is enterprise data models to help data scientists and data analysts provide actionable insights to the leadership. They must have strong experience in understanding data, using a variety of data tools. They must have a proven ability to understand the current data pipeline and ensure a minimal-cost solution architecture is created, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.
- Identify, design, and recommend internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Identify data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to create frameworks for digital twins/digital threads, drawing on relevant experience in data exploration and profiling.
- Involve in data literacy activities for all stakeholders and coordinate with cross-functional teams; act as the SPOC for global master data.

Your profile
- 8+ years of experience in a Data Strategy role, with a graduate degree in Computer Science, Informatics, Information Systems, or another quantitative field.
- Experience with understanding big data tools: Hadoop, Spark, Kafka, etc.
- Experience with understanding relational SQL and NoSQL databases, including Postgres and Cassandra/MongoDB.
- Experience with understanding data pipeline and workflow management tools: Luigi, Airflow, etc.
- 5+ years of advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases: Postgres/SQL/Mongo.
- 2+ years of working knowledge in Data Strategy: Data Governance/MDM, etc.
- 5+ years of experience in creating data strategy frameworks/roadmaps, in analytics and data maturity evaluation based on the current as-is vs to-be framework, and in creating functional requirements documents and enterprise to-be data architecture.
- Relevant experience in identifying and prioritizing use cases for the business, and in identifying important KPIs and opex/capex for CXOs.
- 4+ years of experience in Data Analytics operating models with a vision on prescriptive, descriptive, predictive, and cognitive analytics.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Location: Bengaluru, Mumbai, Chennai, Pune, Hyderabad, Noida

Posted 5 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Ciklum is looking for a Data Engineer to join our team full-time in India.

We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
As a Data Engineer, you will become part of a cross-functional development team working on GenAI solutions for digital transformation across Enterprise Products. The prospective team you will be working with is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services. The team focuses on the ability to deliver comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team with professionals of varying backgrounds from tax technical, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation.

Responsibilities:
- Responsible for the building, deployment, and maintenance of mission-critical analytics solutions that process terabytes of data quickly at big-data scales
- Contributes design, code, and configurations; manages data ingestion, real-time streaming, batch processing, and ETL across multiple data storages
- Responsible for performance tuning of complicated SQL queries and data flows

Requirements:
- Experience coding in SQL/Python, with solid CS fundamentals including data structure and algorithm design
- Hands-on implementation experience working with a combination of the following technologies: Hadoop, MapReduce, Kafka, Hive, Spark, SQL and NoSQL data warehouses
- Experience with the Azure cloud data platform
- Experience working with vector databases (Milvus, Postgres, etc.)
- Knowledge of embedding models and retrieval-augmented generation (RAG) architectures
- Understanding of LLM pipelines, including data preprocessing for GenAI models
- Experience deploying data pipelines for AI/ML workloads, ensuring scalability and efficiency
- Familiarity with model monitoring, feature stores (Feast, Vertex AI Feature Store), and data versioning
- Experience with CI/CD for ML pipelines (Kubeflow, MLflow, Airflow, SageMaker Pipelines)
- Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming)
- Knowledge of data warehousing design, implementation, and optimization
- Knowledge of data quality testing, automation, and results visualization
- Knowledge of BI reports and dashboards design and implementation (PowerBI)
- Experience with supporting data scientists and complex statistical use cases highly desirable

What's in it for you?
- Strong community: Work alongside top professionals in a friendly, open-door environment
- Growth focus: Take on large-scale projects with a global impact and expand your expertise
- Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications
- Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies
- Care: We've got you covered with company-paid medical insurance, mental health support, and financial & legal consultations

About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

Explore, empower, engineer with Ciklum!

Interested already? We would love to get to know you! Submit your application. We can't wait to see you at Ciklum.
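Because the requirements above mention embedding models and retrieval-augmented generation (RAG), a hedged sketch of the retrieval step follows: embed a few documents and a query, then return the closest matches by cosine similarity. The documents and model choice are hypothetical; a production system would store vectors in a vector database such as Milvus or pgvector rather than in memory, and this assumes the sentence-transformers and numpy packages.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy document store; a production system would hold these in Milvus/pgvector.
documents = [
    "Tax services engagement letter template",
    "Quarterly VAT filing process overview",
    "Transfer pricing documentation checklist",
]
doc_vecs = model.encode(documents, normalize_embeddings=True)

def retrieve(query, k=2):
    """Return the k documents most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved passages would then be injected into the LLM prompt.
print(retrieve("How do we file VAT each quarter?"))
```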

Posted 5 days ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Mumbai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and deliver high-quality solutions.

Roles & Responsibilities:
- Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data.
- Monitor and analyze key performance metrics (e.g., CTR, CPC, ROAS) to support business objectives.
- Implement real-time data workflows with anomaly detection and performance reporting.
- Develop and maintain data infrastructure using tools such as Spark, Hadoop, Kafka, and Airflow.
- Collaborate with DevOps teams to deploy data solutions in containerized environments (Docker, Kubernetes).
- Partner with data scientists to prepare, cleanse, and transform data for modeling.
- Support the development of predictive models using tools like BigQuery ML and Scikit-learn.
- Work closely with stakeholders across product, design, and executive teams to understand data needs.
- Ensure compliance with data governance, privacy, and security standards.

Professional & Technical Skills:
- 1-2 years of experience in data engineering or a similar role.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and big data tools (Hive, HBase, Spark).
- Familiarity with DevOps practices and CI/CD pipelines.

Additional Information:
- This position is based at our Mumbai office.
- Master's degree in Computer Science, Engineering, or a related field.

Qualification: 15 years full time education
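Since the responsibilities above include orchestrating pipelines with Airflow, a minimal DAG sketch is shown below: three placeholder extract/transform/load tasks wired in sequence. The DAG name and task bodies are hypothetical, and it assumes a recent Apache Airflow 2.x installation.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from source systems")

def transform():
    print("cleanse and aggregate with Spark or pandas")

def load():
    print("publish curated tables for reporting")

with DAG(
    dag_id="daily_marketing_etl",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```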

Posted 5 days ago

Apply

15.0 - 20.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI to improve performance and efficiency, including but not limited to deep learning, neural networks, chatbots, natural language processing.
Must have skills: Google Cloud Machine Learning Services
Good to have skills: Google Pub/Sub, GCP Dataflow, Google Dataproc
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence to enhance performance and efficiency. Your typical day will involve collaborating with cross-functional teams to design and implement innovative solutions, utilizing advanced technologies such as deep learning and natural language processing. You will also be responsible for analyzing data and refining algorithms to ensure optimal functionality and user experience, while continuously exploring new methodologies to drive improvements in AI applications.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute in providing solutions to work related problems.
- Assist in the design and development of AI-driven applications to meet project requirements.
- Collaborate with team members to troubleshoot and resolve technical challenges.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Google Cloud Machine Learning Services.
- Good To Have Skills: Experience with GCP Dataflow, Google Pub/Sub, Google Dataproc.
- Strong understanding of machine learning frameworks and libraries.
- Experience in deploying machine learning models in cloud environments.
- Familiarity with data preprocessing and feature engineering techniques.

Additional Information:
- The candidate should have minimum 2 years of experience in Google Cloud Machine Learning Services.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education

Posted 5 days ago

Apply

9.0 - 13.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Role & responsibilities

Position: ABO Senior L3 Dev Support
Location: Bengaluru (In-office, 5 days/week; occasional remote possible)
Experience: 9+ years
Key Requirement: Strong Java coding and debugging skills; scripting and SQL experience preferred

Core Skills:
- Proficiency in Java; knowledge of C/C++ is a plus
- Experience with SQL and RDBMS (Sybase ASE/IQ, DB2)
- Familiarity with scripting languages: Shell, Perl, JavaScript, Python
- Exposure to Big Data and cluster computing (Spark, Hadoop, HDFS) is a plus
- Strong problem-solving and debugging skills
- Effective communication and stakeholder engagement

Responsibilities:
- Analyze and resolve complex application issues collaboratively with team members
- Debug code and analyze logs to identify defects and trends
- Manage incidents and communicate with users and stakeholders
- Recommend permanent fixes to improve application stability and performance
- Leverage automation tools (including ML-based) to detect and remediate failures
- Identify and automate repetitive alerts/processes in collaboration with engineering teams
- Challenge existing setups and suggest improvements for stability and efficiency
- Participate in change management to mitigate production risks
- Build and enhance runbooks to reduce operational errors and improve efficiency
- Develop reports to track application health and support performance trends

Posted 5 days ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Vijayawada

Work from Office

Job Overview
Branch launched in India in early 2019 and has seen rapid adoption and growth. We are expanding our product portfolio as well as our user base in all our markets, including India. We are looking for talented Machine Learning Engineers to join us and be part of this journey. You will work closely with other Engineers, Product Managers, and underwriters to develop, improve, and deploy machine learning models and to solve other optimization problems. We make extensive use of machine learning in our credit product, where it is used (among other things) for underwriting and loan servicing decisions. We are also actively exploring other applications of Machine Learning in some of our newer products, with the ultimate goal of improving the user experience.

Machine Learning sits at the intersection of a number of different disciplines: Computer Science, Statistics, Operations Research, Data Science, and others. At Branch, we fundamentally believe that in order for Machine Learning to be impactful, it needs to be closely embedded into the rest of the product development and software engineering process, which is why we emphasize the importance of software engineering skills and experience for this role.

As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making. We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face. We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.

Responsibilities
- Credit Decisions: Core to our business is understanding and building signals from unstructured and structured data to identify good borrowers.
- Customer Service: Using machine learning and LLM/NLP, automate customer service interactions and provide context to our customer service team.
- Fraud Prevention: Identify patterns of fraudulent behavior and build models to detect and prevent these behaviors.
- Teamwork: Bring your experience to bear on the technical direction and abilities of the team, and work cross-functionally with policy and product teams as we improve processes and break new ground.

Qualifications
- 2+ years of hands-on experience building software in a production environment. Startup or early-stage team experience is preferred.
- Excellent software engineering and programming skills, especially Python and SQL.
- A diverse range of data skills, including experimentation, statistics, and machine learning, and experience using these skills to inform business decisions.
- A deep understanding of using cloud computing infrastructure and data pipelines in production.
- Self-motivation: You teach yourself new skills. You take the initiative to solve problems before they arise. You roll up your sleeves and get stuff done.
- Team motivation: You listen to others, speak your mind, and ask the right questions. You are a great collaborator and teacher.
- The drive to make a positive impact on customers' lives.
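As a toy illustration of the credit-decision modeling described above (and not Branch's actual underwriting logic), the sketch below fits a simple classifier on a handful of made-up repayment features and reports AUC on a held-out split. Feature and label names are hypothetical; it assumes pandas and scikit-learn.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical repayment-history features for a handful of borrowers.
data = pd.DataFrame({
    "monthly_income":     [300, 520, 410, 150, 700, 260, 480, 120],
    "txn_count_90d":      [14, 40, 22, 5, 55, 9, 31, 3],
    "prior_loans_repaid": [1, 3, 2, 0, 4, 1, 2, 0],
    "repaid":             [1, 1, 1, 0, 1, 0, 1, 0],
})

X, y = data.drop(columns="repaid"), data["repaid"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# AUC on the held-out rows; with real data this would inform lending decisions.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```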

Posted 5 days ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bareilly

Work from Office

Job Overview
Branch launched in India in early 2019 and has seen rapid adoption and growth. We are expanding our product portfolio as well as our user base in all our markets, including India. We are looking for talented Machine Learning Engineers to join us and be part of this journey. You will work closely with other Engineers, Product Managers, and underwriters to develop, improve, and deploy machine learning models and to solve other optimization problems. We make extensive use of machine learning in our credit product, where it is used (among other things) for underwriting and loan servicing decisions. We are also actively exploring other applications of Machine Learning in some of our newer products, with the ultimate goal of improving the user experience.

Machine Learning sits at the intersection of a number of different disciplines: Computer Science, Statistics, Operations Research, Data Science, and others. At Branch, we fundamentally believe that in order for Machine Learning to be impactful, it needs to be closely embedded into the rest of the product development and software engineering process, which is why we emphasize the importance of software engineering skills and experience for this role.

As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making. We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face. We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.

Responsibilities
- Credit Decisions: Core to our business is understanding and building signals from unstructured and structured data to identify good borrowers.
- Customer Service: Using machine learning and LLM/NLP, automate customer service interactions and provide context to our customer service team.
- Fraud Prevention: Identify patterns of fraudulent behavior and build models to detect and prevent these behaviors.
- Teamwork: Bring your experience to bear on the technical direction and abilities of the team, and work cross-functionally with policy and product teams as we improve processes and break new ground.

Qualifications
- 2+ years of hands-on experience building software in a production environment. Startup or early-stage team experience is preferred.
- Excellent software engineering and programming skills, especially Python and SQL.
- A diverse range of data skills, including experimentation, statistics, and machine learning, and experience using these skills to inform business decisions.
- A deep understanding of using cloud computing infrastructure and data pipelines in production.
- Self-motivation: You teach yourself new skills. You take the initiative to solve problems before they arise. You roll up your sleeves and get stuff done.
- Team motivation: You listen to others, speak your mind, and ask the right questions. You are a great collaborator and teacher.
- The drive to make a positive impact on customers' lives.

Posted 5 days ago

Apply

13.0 - 18.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Skill required: Tech for Operations - Artificial Intelligence (AI)
Designation: AI/ML Computational Science Manager
Qualifications: Any Graduation/Post Graduate Diploma in Management
Years of Experience: 13 to 18 years
Language - Ability: English (Domestic) - Advanced

About Accenture
Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do
You will be part of the Technology for Operations (TFO) team that acts as a trusted advisor and partner to Accenture Operations. The team provides innovative and secure technologies to help clients build an intelligent operating model, driving exceptional results, and works closely with the sales, offering, and delivery teams to identify and build innovative solutions. Major sub-deals include AHO (Application Hosting Operations), ISMT (Infrastructure Management), and Intelligent Automation. The role requires an understanding of the foundational principles and knowledge of Artificial Intelligence (AI), including concepts, techniques, and tools, in order to use AI effectively.

What are we looking for
- Python (Programming Language)
- Python Software Development
- PySpark
- Microsoft SQL Server
- Microsoft SQL Server Integration Services (SSIS)
- Ability to work well in a team
- Written and verbal communication
- Numerical ability
- Results orientation

1 CL7 Data Engineers

Roles and Responsibilities:
- In this role you are required to identify and assess complex problems for your area of responsibility.
- The person would create solutions in situations in which analysis requires an in-depth evaluation of variable factors.
- Requires adherence to strategic direction set by senior management when establishing near-term goals.
- Interaction is with senior management at a client and/or within Accenture, involving matters that may require acceptance of an alternate approach.
- Some latitude in decision-making is involved; you will act independently to determine methods and procedures on new assignments.
- Decisions the individual in this role makes have a major day-to-day impact on the area of responsibility.
- The person manages large to medium sized teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
- Please note that this role may require you to work in rotational shifts.

Qualification: Any Graduation, Post Graduate Diploma in Management

Posted 5 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
We are looking for an enthusiastic AI/ML Developer with 3-5 years of experience to design, develop, and deploy AI/ML solutions. The ideal candidate is passionate about AI, skilled in machine learning, deep learning, and MLOps, and eager to work on cutting-edge projects.

Key Skills & Experience:
- Programming: Python (TensorFlow, PyTorch, Scikit-learn, Pandas).
- Machine Learning: Supervised, Unsupervised, Deep Learning, NLP, Computer Vision.
- Model Deployment: Flask, FastAPI, AWS SageMaker, Google Vertex AI, Azure ML.
- MLOps & Cloud: Docker, Kubernetes, MLflow, Kubeflow, CI/CD pipelines.
- Big Data & Databases: Spark, Dask, SQL, NoSQL (PostgreSQL, MongoDB).

Soft Skills:
- Strong analytical and problem-solving mindset.
- Passion for AI innovation and continuous learning.
- Excellent teamwork and communication abilities.

Qualifications:
- Bachelor's/Master's in Computer Science, AI, Data Science, or related fields.
- AI/ML certifications are a plus.

Career Level - IC4

Responsibilities
Design, develop, and deploy AI/ML solutions across the skill areas listed above, applying machine learning, deep learning, and MLOps practices to cutting-edge projects.

Diversity & Inclusion
An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate, while blending work life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. In order to nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.
About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
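One of the deployment paths named above is FastAPI. A hedged sketch of serving a trained model behind a small prediction endpoint follows; the model file, feature names, and endpoint are hypothetical, and it assumes fastapi, pydantic, uvicorn, and a scikit-learn model saved with joblib.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="churn-scorer")
model = joblib.load("model.joblib")   # hypothetical pre-trained scikit-learn pipeline

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float
    support_tickets: int

@app.post("/predict")
def predict(f: Features):
    # Feature order must match how the model was trained.
    proba = model.predict_proba([[f.tenure_months, f.monthly_spend, f.support_tickets]])
    return {"churn_probability": float(proba[0, 1])}

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```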

Posted 5 days ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Snowflake Data Warehouse
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute in providing solutions to work related problems.
- Assist in the optimization of data pipelines for improved performance and efficiency.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Snowflake Data Warehouse.
- Strong understanding of ETL processes and data integration techniques.
- Experience with data modeling and database design principles.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud data warehousing solutions and architecture.

Additional Information:
- The candidate should have minimum 3 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
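To illustrate the kind of Snowflake ETL step this role covers, here is a hedged sketch that stages a local file and bulk-loads it with COPY INTO. The account, warehouse, and table names are hypothetical; it assumes the snowflake-connector-python package and appropriate permissions.

```python
import snowflake.connector

# Hypothetical account/warehouse/table names; requires snowflake-connector-python.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    # Stage a local file on the table stage, then bulk-load it.
    cur.execute("PUT file:///tmp/orders.csv @%ORDERS")
    cur.execute("""
        COPY INTO ORDERS
        FROM @%ORDERS
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
finally:
    cur.close()
    conn.close()
```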

Posted 5 days ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Your role will require you to stay updated on industry trends and best practices to enhance application performance and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute in providing solutions to work related problems.
- Facilitate communication between technical teams and stakeholders to ensure alignment on project goals.
- Mentor junior team members, providing them with guidance and support in their professional development.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Talend ETL.
- Strong understanding of data integration processes and methodologies.
- Experience with data warehousing concepts and practices.
- Familiarity with SQL and database management systems.
- Ability to troubleshoot and resolve technical issues related to application performance.

Additional Information:
- The candidate should have minimum 3 years of experience in Talend ETL.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education

Posted 5 days ago

Apply

10.0 - 14.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Skill required: Data Management - Structured Query Language (SQL)
Designation: Data Eng, Mgmt & Governance Assoc Mgr
Qualifications: BE/BTech
Years of Experience: 10 to 14 years

About Accenture
Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do
We're Accenture Marketing Operations, the global managed services arm of Accenture Interactive. We sit in the Operations business to take advantage of the industrialized run capabilities, leveraging investments from Accenture Operations. Our quest is to activate the best experiences on the planet by driving value across every customer interaction to maximize marketing performance. We combine deep functional and technical expertise to future-proof our clients' business while accelerating time-to-market and operating efficiently at scale. We are digital professionals committed to providing innovative, end-to-end customer experience solutions focusing on operating marketing models that help businesses transform and excel in the new world, with an ecosystem that empowers our clients to implement the changes necessary to support the transformation of their businesses. The role centers on SQL, the domain-specific language used in programming and designed for querying and modifying data and managing databases.

What are we looking for
- Data collation, analysis, and reporting capability
- Expert in GCP, SQL, Excel
- Create metadata and pipelines for analytics
- Ability to mentor and guide team members
- Good understanding of database concepts
- Prior experience of handling large volumes of data for ad-hoc analysis, standard business reporting, and data wrangling
- Developing pipelines and deploying jobs in production, e.g. using Airflow
- Excellent verbal and written communication
- Strong team player and excellent problem-solving skills
- Ability to work in a global collaborative team environment
- Attention to detail and ability to work in a high-pressure environment
- Experience with Python
- Experience in CPG, Retail, Beauty
- Previous experience of BI tools (e.g. Looker, Tableau, or Power BI)

Roles and Responsibilities:
- Build/maintain logical, physical, and semantic data models that support marketing KPIs, campaign reporting, attribution, segmentation, and personalization
- Build/maintain scalable schemas, data marts, and analytical layers optimized for performance in BI and analytics applications
- Maintain metadata standards, data catalogs, and documentation to ensure data traceability and discoverability
- Lead efforts to ingest, standardize, and harmonize data from multiple marketing sources
- Ensure compliance with data privacy regulations (e.g., GDPR, CCPA) and define access controls and security models for sensitive customer data
- Perform bug diagnosis and fixes

Qualification: BE, BTech

Posted 5 days ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Redshift
Good-to-have skills: PySpark
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the application development process.
- Coordinate with stakeholders to gather requirements.
- Ensure timely delivery of projects.

Professional & Technical Skills:
- Must-have: Proficiency in AWS Glue.
- Good-to-have: Experience with PySpark.
- Strong understanding of ETL processes.
- Experience in data transformation and integration.
- Knowledge of cloud computing platforms.
- Ability to troubleshoot and resolve technical issues.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Glue.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education
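For illustration of the PySpark-based ETL this listing touches on, a minimal sketch follows; the S3 paths and columns are hypothetical, and the same DataFrame logic would typically sit inside an AWS Glue job script with Glue-specific boilerplate around it.

# Minimal PySpark ETL sketch (hypothetical S3 paths and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files from a landing zone.
raw = spark.read.option("header", "true").csv("s3://example-bucket/landing/orders/")

# Transform: type casting, filtering, and a derived column.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to the curated zone.
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-bucket/curated/orders/")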

Posted 5 days ago

Apply

3.0 - 8.0 years

1 - 3 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Immediate joiners only (0-15 days notice period considered). 3+ years of experience mandatory.
Work Mode: Hybrid
Work Location: Hyderabad, Bengaluru, Chennai, Pune
Mandatory Skills: Azure, ADF, Spark, Astronomer

Data Engineering topics:
- Kafka-based ingestion (a Spark Structured Streaming sketch follows this listing)
- API-based ingestion
- Orchestration tools: Astronomer, Apache Airflow, Dagster, etc.
- Familiarity with Apache Iceberg, Delta, and Hudi table designs: when, why, and how to use each
- Spark architecture
- Optimization techniques
- Performance issues and mitigation techniques

Data Quality topics (data engineering without quality provides no value):
- Great Expectations (https://docs.greatexpectations.io/docs/core/introduction/try_gx/)
- Pydeequ (https://pydeequ.readthedocs.io/en/latest/index.html)
- Databricks DLT expectations (Spark based)
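As a hedged illustration of the Kafka-based ingestion topic above, here is a minimal Spark Structured Streaming sketch in Python; the broker address, topic name, and output paths are assumptions, and writing in Delta format assumes the Delta Lake libraries are available on the cluster.

# Minimal Kafka-to-Delta ingestion sketch (hypothetical broker, topic, and paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_ingestion").getOrCreate()

# Read a Kafka topic as a streaming DataFrame.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
         .option("subscribe", "orders")                      # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to string for downstream parsing.
payload = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("json_payload"),
    "timestamp",
)

# Append to a Delta table with checkpointing (requires Delta Lake on the cluster).
query = (
    payload.writeStream.format("delta")
           .option("checkpointLocation", "/chk/orders_raw")  # hypothetical path
           .outputMode("append")
           .start("/lake/bronze/orders")                     # hypothetical table path
)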

Posted 5 days ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Chennai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Agile Project Management
Good-to-have skills: Apache Spark
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while troubleshooting any issues that arise in data flow and processing. Your role will be pivotal in ensuring that data is accessible, reliable, and ready for analysis, contributing to informed decision-making across the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering practices.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Agile Project Management.
- Good-to-have: Experience with Apache Spark, Google Cloud SQL, and Python.
- Strong understanding of data pipeline architecture and design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data quality frameworks and best practices (a minimal data quality check sketch follows this listing).

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Agile Project Management.
- This position is based in Chennai (mandatory).
- 15 years of full-time education is required.

Qualification: 15 years of full-time education
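As a hedged illustration of the data quality checks mentioned above, here is a minimal PySpark sketch of the kind of validation a framework such as Great Expectations or Pydeequ formalizes; the table name, columns, and rules are hypothetical.

# Minimal data quality check sketch (hypothetical table and rules), written in plain
# PySpark to illustrate what a dedicated DQ framework would declare as expectations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
orders = spark.table("curated.orders")  # hypothetical curated table

total = orders.count()
null_ids = orders.filter(F.col("order_id").isNull()).count()
duplicate_ids = total - orders.select("order_id").distinct().count()
negative_amounts = orders.filter(F.col("amount") < 0).count()

# Fail the pipeline run if any rule is violated.
violations = {
    "null order_id": null_ids,
    "duplicate order_id": duplicate_ids,
    "negative amount": negative_amounts,
}
failed = {rule: count for rule, count in violations.items() if count > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")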

Posted 5 days ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Bharatpur

Work from Office

Role Overview: We're on the lookout for a Buzz Lead – Social & Audience Growth who lives at the intersection of content, culture, and conversation. In this role, you'll shape the voice of our brand across platforms, especially Facebook and Instagram, by creating buzzworthy content, driving engagement, and turning attention into action. You won't just post; you'll spark. Whether it's a viral moment, a timely trend, or a high-impact campaign, you'll lead the charge in making our brand part of everyday conversations. You'll also own key performance metrics like link clicks, CTR, and reach, working closely with editorial, design, and growth teams to scale our presence and deepen our connection with the audience.

Reporting and Scope: This role reports to the Head of Content & Co-Founder (Jaskirat Arora) and will lead a broader team structure comprising 6–7 Sub-Group Heads and 15–20 Content Associates.

What You'll Do:
- Develop and execute high-impact social media strategies tailored to each platform, including Facebook, Instagram, TikTok, Threads, Flipboard, Twitter/X, and YouTube Shorts, to drive audience growth, engagement, and performance
- Plan and optimize paid campaigns (especially on Meta) with a performance-first mindset, focused on metrics like link clicks, CTR, and conversions
- Amplify editorial content across social platforms through timely, high-impact posts, without owning editorial planning
- Launch real-time buzz campaigns that align with trending topics, pop culture moments, or live events, working like a live editorial desk on social
- Identify and implement new content formats (Reels, carousels, polls, memes) to keep the brand culturally in tune and algorithm-friendly
- Continuously experiment with creatives, copy, and targeting to drive performance: test fast, learn faster
- Monitor social media sentiment, moderate comments, and respond quickly during crisis moments to protect and shape the brand's voice
- Partner with niche influencers, creators, and fan communities to co-create content and drive organic engagement
- Track and report performance across platforms using tools like Meta Ads Manager, CrowdTangle, Chartbeat, TweetDeck, and native analytics, focused on clicks to conversions; experience with tools like Buffer, Later, or Sprout Social is a plus
- Partner with growth, product, SEO, and distribution teams to inform and support their KPIs, such as traffic, retention, and content discoverability, while owning execution purely from a social amplification lens
- Stay up to date on algorithm changes, emerging platforms, and competitor activity to ensure the brand remains ahead of the curve

Posted 5 days ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Solapur

Work from Office

Role Overview: We're on the lookout for a Buzz Lead – Social & Audience Growth who lives at the intersection of content, culture, and conversation. In this role, you'll shape the voice of our brand across platforms, especially Facebook and Instagram, by creating buzzworthy content, driving engagement, and turning attention into action. You won't just post; you'll spark. Whether it's a viral moment, a timely trend, or a high-impact campaign, you'll lead the charge in making our brand part of everyday conversations. You'll also own key performance metrics like link clicks, CTR, and reach, working closely with editorial, design, and growth teams to scale our presence and deepen our connection with the audience.

Reporting and Scope: This role reports to the Head of Content & Co-Founder (Jaskirat Arora) and will lead a broader team structure comprising 6–7 Sub-Group Heads and 15–20 Content Associates.

What You'll Do:
- Develop and execute high-impact social media strategies tailored to each platform, including Facebook, Instagram, TikTok, Threads, Flipboard, Twitter/X, and YouTube Shorts, to drive audience growth, engagement, and performance
- Plan and optimize paid campaigns (especially on Meta) with a performance-first mindset, focused on metrics like link clicks, CTR, and conversions
- Amplify editorial content across social platforms through timely, high-impact posts, without owning editorial planning
- Launch real-time buzz campaigns that align with trending topics, pop culture moments, or live events, working like a live editorial desk on social
- Identify and implement new content formats (Reels, carousels, polls, memes) to keep the brand culturally in tune and algorithm-friendly
- Continuously experiment with creatives, copy, and targeting to drive performance: test fast, learn faster
- Monitor social media sentiment, moderate comments, and respond quickly during crisis moments to protect and shape the brand's voice
- Partner with niche influencers, creators, and fan communities to co-create content and drive organic engagement
- Track and report performance across platforms using tools like Meta Ads Manager, CrowdTangle, Chartbeat, TweetDeck, and native analytics, focused on clicks to conversions; experience with tools like Buffer, Later, or Sprout Social is a plus
- Partner with growth, product, SEO, and distribution teams to inform and support their KPIs, such as traffic, retention, and content discoverability, while owning execution purely from a social amplification lens
- Stay up to date on algorithm changes, emerging platforms, and competitor activity to ensure the brand remains ahead of the curve

Posted 5 days ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Pune

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure DevOps
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding the team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in driving the success of application projects and fostering a collaborative environment among team members.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure adherence to timelines and quality standards.

Professional & Technical Skills:
- Must-have: Proficiency in Microsoft Azure DevOps.
- Demonstrable experience with DevOps processes and practices; experience with GitLab CI/CD pipelines is an advantage.
- Strong experience with shell scripting.
- In-depth knowledge of Linux and Kubernetes.
- Experience in the Azure ecosystem, with a focus on key Azure offerings such as Azure Kubernetes Service, Azure Databricks, Azure Container Registry, and Azure Storage.
- Knowledge of the Python programming language; knowledge of R and SAS is an advantage.
- Technical understanding of distributed and parallel computing technologies such as Apache Spark, in-memory data grids, or grid computing.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure DevOps.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 5 days ago

Apply

6.0 - 9.0 years

0 - 0 Lacs

Pune

On-site

We are hiring a Spark Developer for an India MNC.

Profile: Spark Developer
Experience: 6 to 9 years
Location: Pune

Responsibilities:
- Spark with Scala/Python/Java
- Experience and expertise in at least one of the following languages: Java, Scala, Python
- Experience and expertise in Spark architecture
- Experience in the range of 8-10 years plus
- Good problem-solving and analytical skills
- Ability to comprehend business requirements and translate them into technical requirements
- Good communication and collaboration skills within the team and across vendors
- Familiar with the development life cycle, including CI/CD pipelines
- Proven experience in, and interest in, supporting existing strategic applications
- Familiarity with working in an agile methodology

Interested candidates can share their CV/resume at 9582342017 or vimhr11@gmail.com.

Regards,
Kirti Shukla
HR Recruiter
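As a hedged illustration of the Spark expertise this posting asks for, here is a minimal PySpark sketch of a common optimization: broadcasting a small dimension table to avoid a shuffle-heavy join. The table names and paths are hypothetical.

# Minimal Spark join optimization sketch (hypothetical tables): broadcast the small
# dimension table so the large fact table is not shuffled across the cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("broadcast_join_demo").getOrCreate()

facts = spark.read.parquet("/lake/silver/transactions")  # large fact table (hypothetical path)
dims = spark.read.parquet("/lake/silver/stores")          # small dimension table (hypothetical path)

# Broadcast hint: ship the small table to every executor instead of shuffling both sides.
enriched = facts.join(F.broadcast(dims), on="store_id", how="left")

# Cache if the enriched result is reused by several downstream aggregations.
enriched.cache()
daily_sales = enriched.groupBy("store_region", "txn_date").agg(F.sum("amount").alias("total_sales"))
daily_sales.write.mode("overwrite").parquet("/lake/gold/daily_sales")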

Posted 5 days ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud platforms and services related to data analytics.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Indore office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education
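For illustration of the Databricks-style pipeline work this listing describes, a minimal PySpark sketch of a Delta table load follows; the mount path, table, and column names are hypothetical, and the Delta format assumes a Databricks runtime or any Spark cluster with Delta Lake installed.

# Minimal Delta load sketch (hypothetical paths and table), assuming a Databricks
# runtime or a Spark cluster with Delta Lake available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta_load").getOrCreate()

# Read raw JSON landed by an ingestion job.
raw = spark.read.json("/mnt/raw/customers/")  # hypothetical mount path

# Light cleansing before publishing to the curated layer.
curated = (
    raw.dropDuplicates(["customer_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Publish as a managed Delta table for downstream analytics.
curated.write.format("delta").mode("overwrite").saveAsTable("analytics.customers")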

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies