
16420 Spark Jobs - Page 26

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description and Requirements

Position Summary
The MetLife Corporate Technology (CT) organization is evolving to enable MetLife's New Frontier strategy. With a strong vision in place, we are a global function focused on driving digital technology strategies for key corporate functions within MetLife, including Finance, Actuarial, Reinsurance, Legal, Human Resources, Employee Experience, Risk, Treasury, Audit, and Compliance. In partnership with our business leaders, we develop and deliver seamless technology experiences to our employees across the entire employee lifecycle. Our vision and mission are to create innovative, transformative, and contemporary technology solutions that empower our leaders and employees so they can focus on what matters most: our customers. We are technologists with strong business acumen, focused on developing our talent to continually transform and innovate.

We are seeking a highly motivated and skilled Azure Data Engineer to join our growing team in Hyderabad. This position is ideal for professionals with 4-8 years of experience in designing, building, and maintaining scalable cloud-based data solutions. As an Azure Data Engineer at MetLife, you will collaborate with cross-functional teams to enable data transformation, analytics, and decision-making by leveraging Microsoft Azure's advanced technologies. The ideal candidate is a strategic thinker, an effective communicator, and an expert in technological development.

Key Relationships
Internal stakeholders

Key Responsibilities
- Design, develop, and maintain efficient and scalable data pipelines using Azure Data Factory (ADF) for ETL/ELT processes.
- Build and optimize data models and data flows in Azure Synapse Analytics, SQL Databases, and Azure Data Lake.
- Work with large datasets to define, test, and implement data storage, transformation, and processing strategies using Azure-based services.
- Create and manage data pipelines for ingesting, processing, and transforming data from various sources into a structured format.
- Develop solutions for real-time and batch processing using tools like Azure Stream Analytics and Event Hubs.
- Implement data security, governance, and compliance measures to ensure the integrity and accessibility of the organization's data assets.
- Contribute to the migration of on-premises databases and ETL processes to the Azure cloud.
- Build processes to identify, monitor, and resolve data inconsistencies and quality issues.
- Collaborate with data architects, business analysts, and developers to deliver reliable and performant data solutions aligned with business requirements.
- Monitor and optimize the performance and cost of Azure-based data solutions.
- Document architectures, data flows, pipelines, and implementations for future reference and knowledge sharing.

Knowledge, Skills, and Abilities

Education
- Bachelor's or Master's degree in Computer Science, Information Systems, or an equivalent engineering field.

Experience
Required:
- 4-8 years of experience in data engineering, with a strong focus on Azure-based services.
- Proficiency in Azure Data Factory (ADF), Azure Synapse Analytics, Azure Data Lake, and Azure SQL Databases.
- Strong knowledge of data modeling, ETL/ELT processes, and data pipeline design.
- Hands-on experience with Python, SQL, and Spark for data manipulation and transformation.
- Exposure to big data platforms like Hadoop, Databricks, or similar technologies.
- Experience with real-time data streaming using tools like Azure Stream Analytics, Event Hubs, or Service Bus.
- Familiarity with data governance, best practices, and security protocols within cloud environments.
- Solid understanding of Azure DevOps for CI/CD pipelines around data workflows.
- Strong problem-solving skills with attention to detail and a results-driven mindset.
- Excellent collaboration, communication, and interpersonal skills for working with cross-functional teams.

Preferred:
- Demonstrated experience in end-to-end cloud data warehouse migrations.
- Familiarity with Power BI or other visualization tools for creating dashboards and reports.
- Azure Data Engineer Associate or Azure Solutions Architect certification is a plus.
- Understanding of machine learning concepts and integrating AI/ML pipelines is an advantage.

Skills and Competencies
- Language: business-level proficiency in English.
- Communication: ability to influence, help communicate the organization's direction, and ensure results are achieved.
- Collaboration: proven track record of building collaborative partnerships and operating effectively in a global environment.
- Diverse environment: can-do attitude and ability to work in a fast-paced environment.

Tech Stack
- Development and delivery methods: Agile (Scaled Agile Framework)
- DevOps and CI/CD: Azure DevOps
- Development frameworks and languages: SQL, Spark, Python
- Azure: functional knowledge of cloud-based solutions

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
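To give a concrete flavor of the pipeline work this role describes, here is a minimal PySpark sketch of a batch ETL step that reads raw CSV data from Azure Data Lake Storage, cleanses it, and writes a curated Delta table. It is an illustration only: the storage paths and column names are hypothetical, and it assumes a Spark environment (e.g., Synapse Spark or Databricks) already configured with ADLS credentials and Delta Lake support.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes an environment (Synapse/Databricks) where ADLS access is already configured.
spark = SparkSession.builder.appName("policy-etl").getOrCreate()

# Hypothetical source and target paths in Azure Data Lake Storage Gen2.
source = "abfss://raw@examplelake.dfs.core.windows.net/policies/"
target = "abfss://curated@examplelake.dfs.core.windows.net/policies_clean/"

df = (
    spark.read.option("header", True).csv(source)
    # Basic cleansing: drop rows missing the key, normalize types, stamp the load date.
    .dropna(subset=["policy_id"])
    .withColumn("premium", F.col("premium").cast("double"))
    .withColumn("ingest_date", F.current_date())
)

# Write as Delta, partitioned for downstream Synapse/Power BI queries.
df.write.format("delta").mode("overwrite").partitionBy("ingest_date").save(target)
```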

Posted 3 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Summary

Position Summary: Technical Lead – Big Data & Python

As a Technical Lead, you will be a strong full-stack developer and individual contributor responsible for designing application modules and delivering them from the technical standpoint. You should be highly skilled at producing high-level designs in collaboration with the architect, and at leading module implementations technically. You must be a strong developer with the ability to innovate, and the go-to person for your assigned modules, applications/projects, and initiatives. You will maintain appropriate certifications and apply the respective skills on project engagements.

Work you'll do
A unique opportunity to be part of a growing Delivery, Methods & Tools team that drives consistency, quality, and efficiency of the services delivered to stakeholders.

Responsibilities:
- Full-stack, hands-on developer and strong individual contributor; go-to person on assigned projects.
- Understand and implement the project as per the proposed architecture, applying best design principles and patterns.
- Understand and implement the security aspects of the application.
- Know Azure DevOps (ADO) and be familiar with using it.
- Obtain/maintain appropriate certifications and apply the respective skills on project engagements.
- Lead or contribute significantly to the Practice.
- Estimate and prioritize product backlogs; define work items.
- Work on unit test automation; recommend improvements to existing software programs as necessary.
- Serve as the team's go-to person for technical issues; conduct peer reviews and tech sessions within the team.
- Provide input to standards and guidelines; implement best practices to enable consistency across all projects.
- Participate in continuous improvement processes, as assigned.
- Mentor and coach juniors in the team; contribute to POCs; support the QA team with clarifications.
- Take ownership of tollgate and deployment activities; oversee the development of documentation.
- Participate in regular work and status communications and stakeholder updates.
- Support the development of intellectual capital; contribute to the knowledge network.
- Act as a technical escalation point; conduct sprint reviews; optimize code and advise the team on best practices.

Skills:
- Education qualification: BE/B.Tech (IT/CS/Electronics), MCA, or MSc Computer Science
- 6-9 years of IT experience in application development, support, or maintenance activities
- 2+ years of experience in team management
- In-depth knowledge of software development lifecycles, including agile development and testing
- Enterprise data management framework, data security & compliance (optional)
- Data ingestion, storage, and transformation
- Data auditing and validation (optional)
- Data visualization with Power BI (optional)
- Data analytics systems (optional)
- Scaling and handling large data sets
- Designing and building data services, with at least 2+ years in: Azure SQL DB, SQL Warehouse, ADF, Azure Storage, ADO CI/CD, Azure Synapse
- Data model design; modeling and depiction of data entities; metadata management (optional)
- Database development patterns and practices: SQL/NoSQL (relational/non-relational, native JSON), flexible schemas, indexing practices, master/child data management models, columnar and row stores, APIs/SDKs for NoSQL database operations and management
- Design and implementation of data warehouses: Azure Synapse, Data Lake, Delta Lake, Apache Spark management
- Programming languages: PySpark/Python, C# (optional); API invocation (request/response); PowerShell with Azure CLI (optional)
- Git with ADO repo management, branching strategies, and version control (rebasing, filtering, cloning, merging)
- Debugging, performance tuning, and optimization skills: ability to analyze PySpark code and PL/SQL, improve response times, manage garbage collection, and apply debugging, logging, and alerting techniques
- Prior experience demonstrating good business understanding (experience in a professional services organization is a plus)
- Excellent written and verbal communication, organization, analytical, planning, and leadership skills
- Strong management, communication, technical, and remote collaboration skills
- Experience handling multiple projects and cross-functional teams, with the ability to coordinate across teams in a large matrix organization
- Ability to effectively conduct technical discussions directly with project/product management and clients
- Excellent team collaboration skills

Education & Experience:
- Education qualification: BE/B.Tech (IT/CS/Electronics), MCA, or MSc Computer Science
- 6-9 years of domain experience or other relevant industry experience
- 2+ years of product owner, business analyst, or system analysis experience
- Minimum 3+ years of software development experience in .NET projects
- 3+ years of experience in Agile/Scrum methodology

Work timings: 9am-4pm and 7pm-9pm
Location: Hyderabad
Experience: 6-9 years

The team
At Deloitte, the Shared Services center improves overall efficiency and control while giving every business unit access to the company's best and brightest resources. It also lets business units focus on what really matters: satisfying customers and developing new products and services to sustain competitive advantage. A shared services center is a simple concept, but making it work is anything but easy. It involves consolidating and standardizing a wildly diverse collection of systems, processes, and functions. And it requires a high degree of cooperation among business units that generally are not accustomed to working together, with people who do not necessarily want to change. The USI shared services team provides a wide array of services to the U.S. and is constantly evaluating and expanding its portfolio, including call center support, document services support, financial processing and analysis support, records management support, ethics and compliance support, and admin assistant support.

How you'll grow
At Deloitte, we've invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in exactly the same way, so we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.

#CAP-PD

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300914

Posted 3 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future.

We are looking to hire AWS Professionals in the following areas:

AWS Data Engineer

Primary skillset: AWS services including Glue, PySpark, SQL, Databricks, Python
Secondary skillset: any ETL tool, GitHub, DevOps (CI/CD)
Experience: 3-4 years
Degree in computer science, engineering, or similar fields

Mandatory skill set: Python, PySpark, SQL, and AWS, with experience designing, developing, testing, and supporting data pipelines and applications.
- 3+ years of working experience in data integration and pipeline development.
- 3+ years of experience with AWS Cloud data integration using a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems; Databricks and Redshift experience is a major plus.
- 3+ years of experience using SQL in the development of data warehouse projects/applications (Oracle & SQL Server).
- Strong real-world experience in Python development, especially in PySpark in an AWS Cloud environment.
- Strong experience with SQL and NoSQL databases such as MySQL, Postgres, DynamoDB, and Elasticsearch.
- Workflow management tools like Airflow.
- AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice).

Good to have: Snowflake, Palantir Foundry.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and an ethical corporate culture
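As an illustrative sketch of the Glue work named above (not YASH's actual codebase), the following PySpark Glue job reads a cataloged table, remaps columns, and writes Parquet to S3. The database, table, and bucket names are hypothetical, and the awsglue modules are only available inside the AWS Glue runtime.

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve arguments and set up contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a hypothetical Glue Data Catalog table.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Rename/cast columns, then write Parquet to a hypothetical S3 prefix.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "string", "amount", "double")],
)
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```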

Posted 3 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

7+ years of experience in Big Data with strong expertise in Spark and Scala.

Mandatory skills:
- Big Data, primarily Spark and Scala
- Strong knowledge of HDFS, Hive, and Impala, with knowledge of Unix, Oracle, and Autosys
- Not limited to Spark batch; Spark streaming experience is required
- NoSQL DB experience: HBase, MongoDB, or Couchbase
- Strong communication skills

Good to have: Agile methodology and banking expertise.

Posted 3 days ago

Apply

2.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Description:

Business Title: QA Manager
Years of Experience: 10+

Job Description
The purpose of this role is to ensure the developed software meets the client requirements and the business' quality standards within the project release cycle and established processes, and to lead QA technical initiatives in order to optimize the test approach and tools.

Must-Have Skills
- At least 2 years in a lead role.
- Experience with the Azure cloud.
- Testing file-based data lake solutions or Big Data-based solutions.
- Worked on migration or implementation of Azure Data Factory projects.
- Strong experience in ETL/data pipeline testing, preferably with Azure Data Factory.
- Proficiency in SQL for data validation and test automation.
- Familiarity with Azure services: Data Lake, Synapse Analytics, Azure SQL, Key Vault, and Logic Apps.
- Experience with test management tools (e.g., Azure DevOps, JIRA, TestRail).
- Understanding of CI/CD pipelines and integration of QA in DevOps workflows.
- Experience with data quality frameworks (e.g., Great Expectations, Deequ).
- Knowledge of Python or PySpark for data testing automation.
- Exposure to Power BI or other BI tools for test result visualization.
- Azure Data Factory; exposure to Azure Databricks.
- SQL/stored procedures on SQL Server; ADLS Gen2.
- Exposure to Python/shell scripting.

Good-to-Have Skills
- Exposure to any ETL tool.
- Any other cloud experience (AWS/GCP).
- Exposure to Spark architecture, including Spark Core, Spark SQL, DataFrame, Spark Streaming, and fault tolerance mechanisms.
- ISTQB or equivalent QA certification.
- Working experience with JIRA and Agile.
- Experience with testing SOAP/API projects.
- Stakeholder communication; Microsoft Office.

Key Responsibilities
- Lead the QA strategy, planning, and execution for ADF-based data pipelines and workflows.
- Design and implement test plans, test cases, and test automation for data ingestion, transformation, and loading processes.
- Validate data accuracy, completeness, and integrity across source systems, staging, and target data stores (e.g., Azure SQL, Synapse, Data Lake).
- Collaborate with data engineers, architects, and business analysts to understand data flows and ensure test coverage.
- Develop and maintain automated data validation scripts using tools like PySpark, SQL, PowerShell, or Azure Data Factory Data Flows.
- Monitor and report on data quality metrics, defects, and test coverage.
- Ensure compliance with data governance, security, and privacy standards.
- Mentor junior QA team members and coordinate testing efforts across sprints.

Education Qualification
Minimum Bachelor's degree in Computer Science, Information Systems, or a related field.

Certification, If Any
Any basic-level certification in AWS/Azure/GCP; Snowflake Associate/Core.

Shift timing: 12 PM to 9 PM and/or 2 PM to 11 PM, IST time zone.
Location: DGS India - Mumbai - Goregaon Prism Tower
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
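To illustrate the kind of automated data validation this role calls for, here is a small PySpark sketch that reconciles an ADF-loaded target against its source by comparing row counts and an aggregate checksum. The table and column names are hypothetical, and it assumes both tables are reachable from the Spark metastore (e.g., in Databricks).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adf-pipeline-qa").getOrCreate()

# Hypothetical source and target tables (e.g., staging vs. the Synapse/Data Lake
# target that an ADF pipeline loads).
source = spark.table("staging.customers")
target = spark.table("curated.customers")

# Reconciliation check 1: row counts must match.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# Reconciliation check 2: aggregate checksum over a numeric column.
src_sum = source.agg(F.sum("balance")).first()[0]
tgt_sum = target.agg(F.sum("balance")).first()[0]
assert src_sum == tgt_sum, f"Balance checksum mismatch: {src_sum} vs {tgt_sum}"

print("Validation passed: counts and checksums match")
```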

Posted 3 days ago

Apply

6.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Job Summary
We are seeking a Senior Data Engineer to join our growing data team, where you will help build and scale the data infrastructure powering analytics, machine learning, and product innovation. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing robust, scalable, and secure data pipelines and platforms. You will work closely with data scientists, software engineers, and product teams to deliver clean, reliable data for critical business and clinical applications.

Key Responsibilities:
- Design, implement, and optimize complex data pipelines using advanced SQL, ETL tools, and integration technologies.
- Collaborate with cross-functional teams to implement optimal data solutions for advanced analytics and data science initiatives.
- Spearhead process improvements, including automation, data delivery optimization, and infrastructure redesign for scalability.
- Evaluate and recommend emerging data technologies to build comprehensive data integration strategies.
- Lead technical discovery processes, defining complex requirements and mapping out detailed scenarios.
- Develop and maintain data governance policies and procedures.

What You'll Need to Be Successful (Required Skills):
- 5-7 years of experience in data engineering or related roles.
- Advanced proficiency in multiple programming languages (e.g., Python, Java, Scala) and expert-level SQL knowledge.
- Extensive experience with big data technologies (Hadoop ecosystem, Spark, Kafka) and cloud-based environments (Azure, AWS, or GCP).
- Proven experience in designing and implementing large-scale data warehousing solutions.
- Deep understanding of data modeling techniques and enterprise-grade ETL tools.
- Demonstrated ability to solve complex analytical problems.

Education/Certifications:
- Bachelor's degree in Computer Science, Information Management, or a related field.

Preferred Skills:
- Experience in the healthcare industry, including clinical, financial, and operational data.
- Knowledge of machine learning and AI technologies and their data requirements.
- Familiarity with data visualization tools and real-time data processing.
- Understanding of data privacy regulations and experience implementing compliant solutions.

Note: We work 5 days from the office, India regular shift. Netsmart India has set up our new Global Capability Centre (GCC) at Godrej Centre, Byatarayanapura (Hebbal area): https://maps.app.goo.gl/RviymAeGSvKZESSo6

Posted 3 days ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Mandatory Skills
- 8-10 years of experience.
- Strong proficiency in Python and SQL, and experience with data processing libraries (e.g., Pandas, PySpark).
- Familiarity with Generative AI frameworks like LangChain, LangGraph, or similar tools.
- Experience integrating APIs from pre-trained AI models (e.g., OpenAI, Cohere, Hugging Face).
- Solid understanding of data structures, algorithms, and distributed systems.
- Experience with vector databases (e.g., Pinecone, Postgres).
- Familiarity with prompt engineering and chaining AI workflows.
- Understanding of MLOps practices for deploying and monitoring AI applications.
- Strong problem-solving skills and ability to work in a collaborative environment.

Good to Have
- Experience with Streamlit to build application front ends.

Job Description
We are looking for an experienced Python Developer with expertise in Spark, SQL, data processing, and building Generative AI applications. The ideal candidate will focus on leveraging existing AI models and frameworks (e.g., LangChain, LangGraph) to create innovative, data-driven solutions. This role does not involve designing new AI models, but rather integrating and utilizing pre-trained models to solve real-world problems.

Key Responsibilities
- Develop and deploy Generative AI applications using Python and frameworks like LangChain or LangGraph (see the sketch below).
- Work with large-scale data processing frameworks like Apache Spark and SQL to prepare and manage data pipelines.
- Integrate pre-trained AI models (e.g., OpenAI, Hugging Face, Llama) into scalable applications.
- Understand ML/NLP concepts and algorithms, with exposure to scikit-learn and PyTorch.
- Collaborate with data engineers and product teams to design AI-driven solutions.
- Optimize application performance and ensure scalability in production environments.
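As a minimal sketch of the integration pattern this role describes, wiring up a pre-trained model rather than designing one, here is a LangChain chain wrapping an OpenAI chat model. It assumes the langchain-openai package and an OPENAI_API_KEY in the environment; the model name and prompt are illustrative only.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Pre-trained model accessed via API; no model training involved.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_template(
    "Summarize the following customer feedback in one sentence:\n\n{feedback}"
)

# LCEL chaining: pipe prompt -> model -> plain-string parser.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"feedback": "The app is fast, but onboarding was confusing."}))
```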

Posted 3 days ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Mandatory Skills
- 4-6 years of experience with basic proficiency in Python and SQL, and familiarity with libraries like NumPy or Pandas.
- Understanding of fundamental programming concepts (data structures, algorithms, etc.).
- Eagerness to learn new tools and frameworks, including Generative AI technologies.
- Familiarity with version control systems (e.g., Git).
- Strong problem-solving skills and attention to detail.
- Exposure to data processing tools like Apache Spark or PySpark, and SQL.
- Basic understanding of APIs and how to integrate them.
- Interest in AI/ML and willingness to explore frameworks like LangChain.
- Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus.

Job Description
We are seeking a motivated Python Developer to join our team. The ideal candidate will have a foundational understanding of Python programming and SQL, and a passion for learning and growing in the field of software development. You will work closely with senior developers and contribute to building and maintaining applications, with opportunities to explore Generative AI frameworks and data processing tools.

Key Responsibilities
- Assist in developing and maintaining Python-based applications.
- Write clean, efficient, and well-documented code.
- Collaborate with senior developers to integrate APIs and frameworks.
- Support data processing tasks using libraries like Pandas or PySpark (see the sketch below).
- Learn and work with Generative AI frameworks (e.g., LangChain, LangGraph) under guidance.
- Debug and troubleshoot issues in existing applications.
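For a sense of the data processing tasks a junior developer might pick up here, below is a small Pandas sketch that cleans and aggregates a dataset; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical input file; a typical junior task is cleaning and aggregating it.
df = pd.read_csv("orders.csv")

# Basic cleaning: drop incomplete rows, normalize a text column, parse dates.
df = df.dropna(subset=["order_id", "amount"])
df["region"] = df["region"].str.strip().str.upper()
df["order_date"] = pd.to_datetime(df["order_date"])

# Simple aggregation: total order amount per region.
summary = df.groupby("region", as_index=False)["amount"].sum()
summary.to_csv("orders_by_region.csv", index=False)
```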

Posted 3 days ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Hybrid

Experience: 4-10 years
Location: Pune, Bangalore, Chennai, Noida, and Gurgaon
Notice period: immediate to 30 days only

Mandatory skills: Apache Spark, Java programming
- Strong knowledge of the Apache Spark framework: Core Spark, Spark DataFrames, Spark Streaming
- Hands-on experience in at least one programming language (Java)
- Good understanding of distributed programming concepts
- Experience optimizing Spark DAGs and Hive queries on Tez
- Experience using tools like Git, Autosys, Bitbucket, Jira
- Ability to apply DWH principles within a Hadoop environment and NoSQL databases

Posted 3 days ago

Apply

7.0 - 12.0 years

22 - 27 Lacs

Hyderabad, Pune, Mumbai (All Areas)

Work from Office

Job Description - Snowflake Developer

Experience: 7+ years
Location: India, Hybrid
Employment Type: Full-time

Job Summary
We are looking for a Snowflake Developer with 7+ years of experience to design, develop, and maintain our Snowflake data platform. The ideal candidate will have strong expertise in Snowflake SQL, data modeling, and ETL/ELT processes to build efficient and scalable data solutions.

Key Responsibilities

1. Snowflake Development & Implementation
- Design and develop Snowflake databases, schemas, tables, and views
- Write and optimize complex SQL queries, stored procedures, and UDFs
- Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks)
- Manage virtual warehouses, resource monitors, and cost optimization

2. Data Pipeline & Integration
- Build and maintain ETL/ELT pipelines using Snowflake and tools like Snowpark, Python, or Spark
- Integrate Snowflake with cloud storage (S3, Blob Storage) and data sources (APIs)
- Develop data ingestion processes (batch and real-time) using Snowpipe

3. Performance Tuning & Optimization
- Optimize query performance through clustering, partitioning, and indexing
- Monitor and troubleshoot data pipelines and warehouse performance
- Implement caching strategies and materialized views for faster analytics

4. Data Modeling & Governance
- Design star schema, snowflake schema, and normalized data models
- Implement data security (RBAC, dynamic data masking, row-level security)
- Ensure data quality, documentation, and metadata management

5. Collaboration & Support
- Work with analysts, BI teams, and business users to deliver data solutions
- Document technical specifications and data flows
- Provide support and troubleshooting for Snowflake-related issues

Required Skills & Qualifications
- 7+ years in database development, data warehousing, or ETL
- 3+ years of hands-on Snowflake development experience
- Strong SQL and scripting (Python, Bash) skills
- Experience with Snowflake utilities (SnowSQL, Snowsight)
- Knowledge of cloud platforms (AWS, Azure) and data integration tools
- SnowPro Core Certification (preferred but not required)
- Experience with Coalesce, dbt, Airflow, or other data orchestration tools
- Familiarity with CI/CD pipelines and DevOps practices
- Knowledge of data visualization tools (Power BI, Tableau)
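To make a couple of the Snowflake features above concrete, here is a brief sketch using the snowflake-connector-python package to create a change-tracking Stream and run a Time Travel query. The connection parameters and object names are hypothetical; real credentials would come from a secrets manager.

```python
import snowflake.connector

# Hypothetical credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345", user="dev_user", password="***",
    warehouse="DEV_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Change-tracking stream over a source table (typically consumed by a Task).
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE orders")

# Time Travel: read the table as it existed one hour (3600 seconds) ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print("Row count one hour ago:", cur.fetchone()[0])

cur.close()
conn.close()
```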

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Evernorth
Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable, and simple health care, we solve the problems others don't, won't, or can't. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there.

Position Overview
Excited to grow your career? This position's primary responsibility will be to translate software requirements into functions using Mainframe, ETL, and data engineering technologies, with expertise in Databricks and database technologies. This position offers the opportunity to work on modernizing legacy systems, contribute to cloud infrastructure automation, and support production systems in a fast-paced, agile environment. You will work across multiple teams and technologies to ensure reliable, high-performance data solutions that align with business goals. As a Mainframe & ETL Engineer, you will be responsible for the end-to-end development and support of data processing solutions using tools such as Talend, Ab Initio, AWS Glue, and PySpark, with significant work on Databricks and modern cloud data platforms. You will support infrastructure provisioning using Terraform, assist in modernizing legacy systems including mainframe migration, and contribute to performance tuning of complex SQL queries across multiple database platforms including Teradata, Oracle, Postgres, and DB2. You will also be involved in CI/CD practices.

Responsibilities
- Support, maintain, and participate in the development of software utilizing technologies such as COBOL, DB2, CICS, and JCL.
- Support, maintain, and participate in ETL development utilizing technologies such as Talend, Ab Initio, Python, and PySpark on Databricks.
- Work with Databricks to design and manage scalable data processing solutions.
- Implement and support data integration workflows across cloud (AWS) and on-premises environments.
- Support cloud infrastructure deployment and management using Terraform.
- Participate in the modernization of legacy systems, including mainframe migration.
- Perform complex SQL queries and performance tuning on large datasets.
- Contribute to CI/CD pipelines, version control, and infrastructure automation.
- Provide expertise, tools, and assistance to operations, development, and support teams for critical production issues and maintenance.
- Troubleshoot production issues, diagnose the problem, and implement a solution; serve as the first line of defense in finding the root cause.
- Work cross-functionally with the support, development, and business teams to efficiently address customer issues.
- Be an active member of a high-performance software development and support team in an agile environment.
- Engage in fostering and improving organizational culture.

Qualifications

Required Skills:
- Strong analytical and technical skills.
- Proficiency in Databricks, including notebook development, Delta Lake, and Spark-based processing.
- Experience with mainframe modernization or migrating legacy systems to modern data platforms.
- Strong programming skills, particularly in PySpark for data processing.
- Familiarity with data warehousing concepts and cloud-native architecture.
- Solid understanding of Terraform for managing infrastructure as code on AWS.
- Familiarity with CI/CD practices and tools (e.g., Git, Jenkins).
- Strong SQL knowledge on OLAP DB platforms (Teradata, Snowflake) and OLTP DB platforms (Oracle, DB2, Postgres, SingleStore).
- Strong experience with Teradata SQL and utilities.
- Strong experience with Oracle, Postgres, and DB2 SQL and utilities.
- Ability to develop high-quality database solutions.
- Ability to perform extensive analysis on complex SQL processes, with strong design skills.
- Ability to analyze existing SQL queries for performance improvements.
- Experience in software development phases including design, configuration, testing, debugging, implementation, and support of large-scale, business-centric, process-based applications.
- Proven experience working with diverse teams of technical architects, business users, and IT areas on all phases of the software development life cycle.
- Exceptional analytical and problem-solving skills.
- Structured, methodical approach to systems development and troubleshooting.
- Ability to ramp up quickly on a system architecture.
- Experience in designing and developing process-based solutions or BPM (business process management).
- Strong written and verbal communication skills, with the ability to interact with all levels of the organization.
- Strong interpersonal/relationship management skills.
- Strong time and project management skills.
- Familiarity with agile methodology, including SCRUM team leadership.
- Familiarity with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example.
- Desire to work in the application support space.
- Passion for learning and desire to explore all areas of IT.

Required Experience & Education
- Minimum of 8-12 years of experience in an application development role.
- Bachelor's degree equivalent in Information Technology, Business Information Systems, Technology Management, or a related field of study.

Location & Hours of Work: Hyderabad and hybrid (1:00 PM IST to 10:00 PM IST)

Equal Opportunity Statement
Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform, and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
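As an illustrative sketch of the Databricks and Delta Lake work this role involves (not Evernorth's actual pipelines), the snippet below performs an incremental upsert of a daily batch into a curated Delta table using the Delta Lake Python API; the paths, table name, and key column are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# Assumes a Databricks cluster (or a Spark session with delta-spark configured).
spark = SparkSession.builder.getOrCreate()

# Hypothetical daily batch extracted from an upstream system (e.g., via Talend or Glue).
updates = spark.read.parquet("/mnt/landing/members/2024-01-15/")

# Upsert into the curated Delta table, keyed on member_id.
target = DeltaTable.forName(spark, "curated.members")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.member_id = u.member_id")
    .whenMatchedUpdateAll()     # refresh rows that already exist
    .whenNotMatchedInsertAll()  # insert brand-new members
    .execute()
)
```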

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Provide the technical expertise for the Risk Data Platform and the various software components that supplement it (on-prem and cloud).
- Implement standards around development and DevSecOps, and review code and pull requests.
- Act as a technical expert on the design and implementation of the technology solutions to meet the needs of the Data & Enterprise Reporting function on a tactical and strategic basis.
- Be accountable for ensuring compliance of the products and services with mandatory and regulatory requirements, control objectives in the risk and control framework, and technical currency (in line with published standards and guidelines) and, with the architecture function, implementation of the business imperatives.
- Work with the IT communities of practice to maximize automation, increase efficiency, and ensure that best practice and the latest tools, techniques, and processes have been adopted.

Requirements
To be successful in this role, you should meet the following requirements:
- Must have experience in CI/CD: Ansible/Jenkins.
- Must have experience in operating a container orchestration cluster (Kubernetes, Docker).
- Preferred experience with UNIX, the Spark UI, and batch frameworks.
- Proficient understanding of code versioning tools such as Git.
- Knowledge of user authentication and authorization between multiple systems, servers, and environments.
- Strong unit test and debugging skills.
- Knowledge of integrating the Spark framework with Delta Lake.
- Experience with infrastructure scripting solutions such as Python/shell scripting.
- Experience with HashiCorp Vault.

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India

Posted 3 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities and Tasks
- Lead a team of data engineers, developers, and production support team members.
- Be accountable for performance discussions for direct reports; engage team members and work with them on their career development.
- Be responsible for the development, coaching, and performance management of those who report to you, including succession planning.
- Participate in design, architecture review, and deployment of data warehouse solutions.
- Lead and drive project requirements and deliverables.
- Implement solutions that eliminate or minimize technical debt through a well-designed architecture, data model, and lifecycle.
- Bring together and share best-practice knowledge among the data engineering community.
- Coach, mentor, and help develop data engineers.
- Guide and manage the team through operational issues and escalations, and resolve business partner issues in a timely manner with strong collaboration and care for business priorities.
- Learn and be conversational with the various utilities and tools that help with operations monitoring and alerting.
- Collaborate with business partners and other teams to ensure data solutions are available, recover from failures, and operate healthily.
- Contribute to site-level initiatives such as hiring, cross-pillar leadership collaboration, resource management, and engagement.

Qualifications and Experience
- 10+ years developing, delivering, and/or supporting data engineering and advanced analytics solutions.
- 6+ years of experience in managing or leading data engineering teams.
- Intermediate- to advanced-level programming experience, preferably Python; Spark experience is a plus.
- Proficient with ELT or ETL techniques for complex data processing.
- Strong in data structures, data processing, and implementing complex data integrations with applications.
- Good to have: knowledge of a visualization tool like Power BI or Tableau.
- Demonstrated ability to lead multi-functional groups, with diverse interests and requirements, to a common objective.
- Presentation skills, with a high degree of comfort speaking with management and developers.
- A passion for data and information, with strong analytical, problem-solving, and organizational skills.
- The ability to work in a dynamic, fast-paced work environment.
- Self-motivated, with the ability to work under minimal supervision.

Posted 3 days ago

Apply

4.0 years

0 Lacs

India

On-site

Established in 2004, OLIVER is the world's first and only specialist in designing, building, and running bespoke in-house agencies and marketing ecosystems for brands. We partner with over 300 clients in 40+ countries and counting. Our unique model drives creativity and efficiency, allowing us to deliver tailored solutions that resonate deeply with audiences. As a part of The Brandtech Group, we're at the forefront of leveraging cutting-edge AI technology to revolutionise how we create and deliver work. Our AI solutions enhance efficiency, spark creativity, and drive insightful decision-making, empowering our teams to produce innovative and impactful results.

Job Title: Integrated Designer
Role: Freelancer
Duration: 3 to 4 months
Location: Manila, Philippines

About the role:
In a highly competitive and saturated marketplace, creating visually stunning, impactful, and effective creative work is essential. We aim to produce industry-leading, world-class work that's truly beautiful, smart, and effective. To help us achieve our goal, we are looking for a strong, highly motivated Integrated Designer to join us in this exciting ambition. As an Integrated Designer, you will be an experienced digital designer who is comfortable taking design projects from brief through to completion, providing new ideas and creativity while working closely with the client's brand guidelines. You will produce designs to the client's brief and exacting standards, positively influence clients with your creative input, and undertake and pitch new creative concepts.

What you will be doing:
- Producing beautiful and innovative designs for our client's channels.
- Owning the full design lifecycle from concept to execution.
- Developing a deep understanding of target audiences and the client's marketing strategy to deliver high-quality results that have an instant, positive impact on the consumer, promoting products and brands.
- Pitching creative solutions in response to marketing strategies.
- Overseeing and managing the preparation of all finished files to comply with correct output specifications.
- Employing expertise in photo manipulation to enhance visual quality and execution.

What you need to be great in this role:
- A passion for design and conception.
- Minimum 4 years of relevant experience with a digital agency.
- Deep affinity with layout, typography, and idea generation.
- Ability to take ownership of projects from concept through to completion.
- Consistent record of taking on, interpreting, and delivering design projects within agreed deadlines.
- Attention to detail, with the ability to work on your own initiative.
- Excellent experience working closely with clients during kick-off.
- Confidence in bringing your own innovative ideas and creativity to projects while working within a broad range of design guidelines across a variety of design collateral.
- Strong client liaison, with a proven track record of developing strong client relationships.
- Effective organization and prioritization of workloads to manage client delivery while simultaneously influencing clients with your ideas and conceptual creativity.
- Extensive experience in photo manipulation.
- Excellent Adobe Creative Suite skills; working knowledge of After Effects animation is a bonus.
- On- and offline design experience, with an emphasis on digital.
- Retouching experience and skills to a mid-level.
- Confident and comfortable working in a fast-paced, changing client environment.
- A passionate and inspiring creative.
- The aptitude to learn new software and programs efficiently and effectively.
- Self-motivated, working with little supervision.
- Collaborative, open-minded, non-political team player.
- Discreet about all confidential and personal information.
- Driven, proactive, helpful, and enthusiastic.
- Passionate and inquisitive about AI and new technologies. Understanding and knowledge of AI tools is beneficial, but the ability to learn and digest the benefits and features of AI tools is critical.

Req ID: 14099

Our values shape everything we do:
- Be Ambitious to succeed
- Be Imaginative to push the boundaries of what's possible
- Be Inspirational to do groundbreaking work
- Be always learning and listening to understand
- Be Results-focused to exceed expectations
- Be actively pro-inclusive and anti-racist across our community, clients and creations

OLIVER, a part of the Brandtech Group, is an equal opportunity employer committed to creating an inclusive working environment where all employees are encouraged to reach their full potential, and individual differences are valued and respected. All applicants shall be considered for employment without regard to race, ethnicity, religion, gender, sexual orientation, gender identity, age, neurodivergence, disability status, or any other characteristic protected by local laws.

OLIVER has set ambitious environmental goals around sustainability, with science-based emissions reduction targets. Collectively, we work towards our mission, embedding sustainability into every department and through every stage of the project lifecycle.

Posted 3 days ago

Apply

3.0 years

0 Lacs

India

On-site

Established in 2004, OLIVER is the world's first and only specialist in designing, building, and running bespoke in-house agencies and marketing ecosystems for brands. We partner with over 300 clients in 40+ countries and counting. Our unique model drives creativity and efficiency, allowing us to deliver tailored solutions that resonate deeply with audiences. As a part of The Brandtech Group, we're at the forefront of leveraging cutting-edge AI technology to revolutionise how we create and deliver work. Our AI solutions enhance efficiency, spark creativity, and drive insightful decision-making, empowering our teams to produce innovative and impactful results.

Job Title: Digital Designer
Role: Freelancer
Duration: 3 to 4 months
Location: Manila, Philippines

About the role:
OLIVER is a rapidly expanding creative services agency with a twist: we provide our clients with bespoke dedicated agencies that operate from within their offices. We are building a team of digitally focused individuals who have the ability and confidence to learn on the job and help the client transform their digital marketing capabilities. We are looking for a creative Digital Designer who is proficient in digital design, with the ability to produce e-commerce and user-interface webpage designs and social media designs, including engaging social imagery, e-commerce advertising, eDM designs, illustrations, and basic motion graphics.

What you will be doing:
- Create, design, and develop digital assets to support interactive campaigns, including but not limited to email, microsites, landing pages, social media, e-commerce channels, and online banner advertising, with an eye to keeping designs on brand.
- Apply a superior understanding of user flows and customer-centric design, and work within a team (including project managers, content strategists, copywriters, and UX designers) to develop engaging and great-looking user-interface designs.
- Confidently propose and present sound rationale for your work, displaying a solid understanding of user flows, digital best practice, and user requirements.
- Produce basic motion graphics and videos for social channels.
- Collaborate with the existing team to ensure designs are innovative and continuously optimised for digital channels.
- Ensure all work is aligned with digital best practices.
- Constantly push creative boundaries through cutting-edge design styles, and embrace new design trends in your work.
- Work with a mobile-first mindset.
- Be an innovative and creative thinker who constantly pushes their work and their colleagues to embrace new and emerging design trends.

What you need to be great in this role:
- Minimum of 3 years combined experience as a digital designer.
- Basic motion graphics and video editing skills.
- Experience producing e-commerce assets and digital and social media content, demonstrated within a wide and varied portfolio.
- Experience delivering B2C digital collateral in a fast-paced and iterative environment.
- An excellent understanding of the technical constraints and design regulations to which e-commerce, user-interface designs, and social media assets must adhere.
- Able to work comfortably as part of an onsite team.
- The confidence to present a clear design rationale for your work.
- A very creative and confident person who will actively participate in brainstorming sessions to drive digital excellence across the existing team.
- Knowledge of interaction design, mobile design, user-centred experience design, brand development, digital marketing, and the creative process.
- Produces work that follows brand guidelines.
- Good verbal and written English communication and presentation skills.
- Works well under pressure and manages workloads effectively.
- Highly self-motivated and well organised.
- An eye for detail.
- A keen interest in the latest digital trends.
- Creative agency experience.
- Passionate and inquisitive about AI and new technologies. Understanding and knowledge of AI tools is beneficial, but the ability to learn and digest the benefits and features of AI tools is critical.

Req ID: 14100

Our values shape everything we do:
- Be Ambitious to succeed
- Be Imaginative to push the boundaries of what's possible
- Be Inspirational to do groundbreaking work
- Be always learning and listening to understand
- Be Results-focused to exceed expectations
- Be actively pro-inclusive and anti-racist across our community, clients and creations

OLIVER, a part of the Brandtech Group, is an equal opportunity employer committed to creating an inclusive working environment where all employees are encouraged to reach their full potential, and individual differences are valued and respected. All applicants shall be considered for employment without regard to race, ethnicity, religion, gender, sexual orientation, gender identity, age, neurodivergence, disability status, or any other characteristic protected by local laws.

OLIVER has set ambitious environmental goals around sustainability, with science-based emissions reduction targets. Collectively, we work towards our mission, embedding sustainability into every department and through every stage of the project lifecycle.

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Andhra Pradesh, India

On-site

Job Description / Responsibilities
- 5-7 years of experience in Big Data stacks: Spark/Scala/Hive/Impala/Hadoop.
- Strong expertise in Scala: good hands-on experience in the Scala programming language, with the ability to model a given problem statement using object-oriented programming concepts.
- Basic understanding of the Spark in-memory processing framework and the concept of map and reduce tasks.
- Hands-on experience on data processing projects.
- Able to frame SQL queries and analyze data based on the given requirements.
- Advanced SQL knowledge.
- GitHub or Bitbucket.

Primary skill: Spark and Scala, with good hands-on experience in the Scala programming language.
Secondary skills: SQL, Python, Hive, Impala, AWS.

Posted 4 days ago

Apply

30.0 years

0 Lacs

Greater Delhi Area

Remote

ABOUT WIN:
Founded in 1993, WIN is a highly innovative PropTech company building new and disruptive software platforms and products for the massive real estate industry. We have the stability and brand recognition of a 30-year-old company along with the curiosity and energy of a start-up. We are an Entrepreneur 500 company and an Entrepreneur Fastest Growing company, and we have been recognized as the Most Innovative Home Services Company.

If you're looking for a boring job description, this is not for you! Do you practically live on social media? Are you the first to catch every trend and know exactly what makes content tick? If you crave a fast-paced, creative environment where your ideas can truly shine, then this Social Media Specialist role is your calling! You'll collaborate with a talented team of content writers and video editors who will help bring your concepts to life. This is a remote role with working hours from 3:00 PM to 1:00 AM IST, so you'll be right in sync with the pulse of global trends as they happen. This isn't just another job. This is your chance to drive conversations, shape a brand's voice, and actually see your strategies come to life in real time.

What you will do:
- Be the social media mastermind: plan, execute, and optimize social media strategies that make noise, the good kind.
- Ride the trend wave: spot trends before they peak and turn them into fresh, engaging content that resonates.
- Create without limits: from reels to carousels, memes to long-form posts, your playground is wide open to spark curiosity, start conversations, or inspire action.
- Collaborate with creatives: work hand-in-hand with content writers and video editors to build compelling narratives and scroll-stopping visuals.
- Think strategically: analyze metrics, track engagement, and refine your approach to boost performance.

We'd love you on our team if you're:
- A social media savant with a sixth sense for what works online
- A creative soul who's not afraid to break the mold
- A strategic thinker who gets both the big picture and the nitty-gritty
- A collaborator who thrives in a fast-moving, team-first environment
- Experienced (3-5 years), with hands-on expertise managing brand social media across platforms (Instagram, LinkedIn, X, Threads, YouTube Shorts, etc.)

Perks? You bet.
- 100% remote; work from wherever you feel most inspired
- Creative freedom, plus a team that actually listens
- A role that grows with you, not one that boxes you in
- Align your work with U.S. and global time zones (3:00 PM – 1:00 AM IST)

Ready to make scroll-stopping content? Apply now and bring your A-game; we can't wait to see what you create.

Posted 4 days ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office

Key Responsibilities:
Design, develop, and maintain large-scale data processing workflows using big data technologies
Develop ETL/ELT pipelines to ingest, clean, transform, and aggregate data from various sources
Work with distributed computing frameworks such as Apache Hadoop, Spark, Flink, or Kafka
Optimize data pipelines for performance, reliability, and scalability
Collaborate with data scientists, analysts, and engineers to support data-driven projects
Implement data quality checks and validation mechanisms
Monitor and troubleshoot data processing jobs and infrastructure
Document data workflows, architecture, and processes for team collaboration and future maintenance
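As an illustration of the data quality checks this listing mentions, here is a hedged PySpark sketch; the lake paths and column names (order_id, amount) are assumptions, not the employer's schema.

```python
# Illustrative data quality gate in a PySpark ETL step; paths/columns hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

orders = spark.read.parquet("/lake/raw/orders")  # hypothetical source

# Validation: flag rows with missing keys or impossible amounts.
invalid = orders.filter(F.col("order_id").isNull() | (F.col("amount") < 0))

# Route bad rows to a quarantine location; otherwise append to the clean zone.
if invalid.count() > 0:
    invalid.write.mode("overwrite").parquet("/lake/quarantine/orders")
else:
    orders.write.mode("append").parquet("/lake/clean/orders")
```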

Posted 4 days ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Mumbai

Work from Office

We are seeking a skilled Data Engineer with strong experience in PySpark, Python, Databricks, and SQL. The ideal candidate will be responsible for designing and developing scalable data pipelines and processing frameworks using Spark technologies.

Key Responsibilities:
Develop and optimize data pipelines using PySpark and Databricks
Implement batch and streaming data processing solutions
Collaborate with data scientists, analysts, and business stakeholders to gather data requirements
Work with large datasets to perform data transformations
Write efficient, maintainable, and well-documented PySpark code
Use SQL for data extraction, transformation, and reporting tasks
Monitor data workflows and troubleshoot performance issues on Spark platforms
Ensure data quality, integrity, and security across systems

Required Skills:
2+ years of hands-on experience with Databricks
4+ years of experience with PySpark and Python
Strong knowledge of the Apache Spark ecosystem and its architecture
Proficiency in writing complex SQL queries (3+ years)
Experience in handling large-scale data processing and distributed systems
Good understanding of data warehousing concepts and ETL pipelines

Preferred Qualifications:
Experience with cloud platforms like Azure
Familiarity with data lakes and data lakehouse architecture
Exposure to CI/CD and DevOps practices in data engineering projects is an added advantage
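A minimal sketch of the batch-plus-streaming pattern the posting describes, using PySpark Structured Streaming; the Kafka broker, topic name, and in-memory sink are assumptions for illustration (the Kafka connector package must be on the Spark classpath).

```python
# Streaming aggregation sketch; broker, topic, and sink are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Streaming read from Kafka; the same DataFrame API also serves batch reads.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "clickstream")                # hypothetical topic
          .load())

# Transform: cast the payload and count events per one-minute window.
counts = (events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

# Sink: continuously materialize aggregates to an in-memory table for inspection.
query = (counts.writeStream
         .outputMode("complete")
         .format("memory")
         .queryName("clicks_per_minute")
         .start())
```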

Posted 4 days ago

Apply

12.0 - 19.0 years

17 - 32 Lacs

Pune

Work from Office

Must have: Python, Spark, and AWS. Good at problem solving, well-versed with overall project architecture, and hands-on coding experience.

Required Skills:
Proficiency in multiple programming languages, ideally Python
Proficiency in at least one cluster computing framework (preferably Spark; alternatively Flink or Storm)
Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
Proficiency in at least one scheduling/orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar)
Proficiency with data structures, data serialization formats (JSON, AVRO, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and stream), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (develop PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
Strong organizational, problem-solving, and critical thinking skills; strong documentation skills

Preferred Skills:
Proficiency in IaC (preferably Terraform; alternatively AWS CloudFormation)
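Since the role names Airflow as the preferred orchestrator, here is a minimal DAG sketch; the DAG id, schedule, and task bodies are hypothetical placeholders, and the schedule= argument assumes Airflow 2.4+ (older versions use schedule_interval=).

```python
# Minimal Airflow DAG: extract then transform, run daily; everything here is illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")   # placeholder for an S3/JDBC read

def transform():
    print("run Spark job")      # placeholder for a spark-submit or Databricks call

with DAG(
    dag_id="daily_lakehouse_load",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # extract runs before transform
```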

Posted 4 days ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Skills desired:
Strong SQL (multi-pyramid SQL joins)
Python skills (FastAPI or Flask framework)
PySpark
Commitment to work in overlapping hours
GCP knowledge (BQ, Dataproc, and Dataflow)
Amex experience preferred (not mandatory)
Power BI preferred (not mandatory)

Keywords: Flask, PySpark, Python, SQL
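A minimal FastAPI sketch matching the listed Python skill; the route, request model, and warehouse hand-off are hypothetical illustrations, not this employer's service.

```python
# Tiny FastAPI service; run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    sql: str  # a SQL string destined for a warehouse such as BigQuery (hypothetical)

@app.post("/run")
def run_query(q: Query) -> dict:
    # Placeholder: a real service would submit q.sql to BQ/Dataproc and return results.
    return {"status": "accepted", "query": q.sql}
```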

Posted 4 days ago

Apply

3.0 - 5.0 years

3 - 6 Lacs

Chennai

Work from Office

About the Role: We are looking for a highly skilled and motivated Data Engineer to join our client's team. You will play a key role in designing, implementing, and optimizing data architectures and pipelines to support scalable data solutions for our business.

Qualifications:
3-5 years of experience in data engineering, with a focus on building and managing data pipelines
Strong proficiency in relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra)
Experience in building data pipelines with data warehouses like Snowflake and Redshift
Experience in processing unstructured data stored in S3 using Athena, Glue, etc.
Hands-on experience with Kafka for real-time data streaming and messaging
Solid understanding of ETL processes, data integration, and data pipeline optimization
Proficiency in programming languages like Python, Java, or Scala for data processing
Experience with Apache Spark for big data processing and analytics is an advantage
Familiarity with cloud platforms like AWS, GCP, or Azure for data infrastructure is a plus
Excellent problem-solving skills and attention to detail
Strong communication and collaboration skills, with the ability to work effectively in a team environment

Key Responsibilities:
Design, build, and maintain efficient and scalable data pipelines to support data integration and transformation across various sources
Work with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) to manage and optimize large datasets
Utilize Apache Spark for distributed data processing and real-time analytics
Implement and manage Kafka for data streaming and real-time data integration between systems
Collaborate with cross-functional teams to gather and translate business requirements into technical solutions
Monitor and optimize the performance of data pipelines and architectures, ensuring high availability and reliability
Ensure data quality, consistency, and integrity across all systems
Stay up to date with the latest trends and best practices in data engineering and big data technologies
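To make the Kafka requirement concrete, a hedged sketch of producing and consuming JSON events with the kafka-python client; the broker address, topic name, and payload are assumptions for illustration.

```python
# Produce and consume one JSON event via Kafka; broker/topic/payload are hypothetical.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 1, "amount": 250.0})  # hypothetical event
producer.flush()

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # a downstream ETL step would transform and load this record
    break
```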

Posted 4 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role: Data Scientist II

About Media.net: Media.net is a leading, global ad tech company that focuses on creating the most transparent and efficient path for advertiser budgets to become publisher revenue. Our proprietary contextual technology is at the forefront of enhancing programmatic buying, the latest industry standard in ad buying for digital platforms. The Media.net platform powers major global publishers and ad-tech businesses at scale across ad formats like display, video, mobile, and native, as well as search. Media.net's U.S. HQ is based in New York, and the Global HQ is in Dubai. With office locations and consultant partners across the world, Media.net takes pride in the value-add it offers to its 50+ demand and 21K+ publisher partners, in terms of both products and services.

Data Science is at the heart of Media.net. The team uses advanced statistical, machine learning, and deep learning models and large-scale distributed computing, along with tools from mathematics, economics, and auction theory, to build solutions that match users with relevant ads in the most optimal way, thereby maximizing revenue for our customers and for Media.net.

Some of the challenges the team deals with:
How do you use information retrieval and machine learning models to estimate click-through rate and revenue, given information about the position of the slot, the user's device and location, and the content of the page? How do you scale the same for thousands of domains and millions of URLs?
How do you match ads to page views considering contextual information?
How do you design learning mechanisms that continuously learn from user feedback in the form of clicks and conversions?
How do you deal with extremely sparse data? What do you do for new ads and new pages? How do we design better explore-exploit frameworks?
How do you design learning algorithms that are fast and scalable?
How do you combine contextual targeting with behavioral user-based targeting? How do you establish a unique user identity from multiple noisy signals so that behavioral targeting is accurate?
Can you use NLP to find more generic trends based on the content of the page and ads?

What is in it for you?
Understand business requirements, then analyze and extract relevant information from large amounts of historical data.
Use your knowledge of information retrieval, NLP, and machine learning (including deep learning) to build prototype solutions, keeping scale, speed, and accuracy in mind.
Work with engineering teams to implement the prototype.
Work with engineers to design appropriate model performance metrics and create reports to track them.
Work with the engineering teams to identify areas of improvement, jointly develop a research agenda, and execute on it using cutting-edge algorithms and tools.
You will need to understand a broad range of ML algorithms and appreciate how to apply them to complex practical problems. You will also need enough theoretical background and a good grasp of algorithms to critically evaluate existing ML algorithms and to be creative when there is a need to go beyond textbook solutions.

Who should apply for this role?
PhD/research degree or BS/MS in Computer Science, Statistics, Artificial Intelligence, Machine Learning, Operations Research, or a related field
2-4 years of experience in building machine learning/AI/information retrieval models
Extensive knowledge and practical experience in machine learning, data mining, artificial intelligence, and statistics
Understanding of supervised and unsupervised algorithms, including but not limited to linear models, decision trees, random forests, and gradient boosting machines
Excellent analytical and problem-solving abilities
Good knowledge of scientific programming in Python; experience with Apache Spark is desired
Excellent verbal and written communication skills

Bonus Points:
Publications or presentations in recognized machine learning and data mining journals/conferences such as ICML
Knowledge in several of the following: math/math modeling, decision theory, fuzzy logic, Bayesian techniques, optimization techniques, statistical analysis of data, information retrieval, natural language processing, large-scale data processing, and data mining
Ability to deal with ambiguity and break problems down into research problems
Strong theoretical and research acumen
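To make the CTR-estimation challenge above concrete: a toy sketch using the hashing trick with logistic regression on sparse categorical features. The feature names, data, and model choice are illustrative assumptions, not Media.net's actual pipeline.

```python
# Toy CTR model on sparse categorical features; everything here is illustrative.
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

# Each impression: slot position, device, and coarse page topic (all categorical).
impressions = [
    {"slot": "top", "device": "mobile", "topic": "finance"},
    {"slot": "side", "device": "desktop", "topic": "sports"},
    {"slot": "top", "device": "desktop", "topic": "finance"},
    {"slot": "side", "device": "mobile", "topic": "sports"},
]
clicks = [1, 0, 1, 0]  # toy labels

# Hashing trick keeps the design matrix sparse even with millions of raw values.
hasher = FeatureHasher(n_features=2**8, input_type="dict")
X = hasher.transform(impressions)

model = LogisticRegression().fit(X, clicks)

# Predicted click probability for an unseen impression.
new = hasher.transform([{"slot": "top", "device": "mobile", "topic": "sports"}])
print(model.predict_proba(new)[:, 1])
```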
