5.0 - 10.0 years
7 - 17 Lacs
Hyderabad
Work from Office
About this role: Wells Fargo is seeking a Lead Data Engineer.

In this role, you will:
Lead complex initiatives with broad impact and act as a key participant in large-scale software planning for the Technology area
Design, develop, and run tooling to discover problems in data and applications, and report the issues to engineering and product leadership
Review and analyze complex software enhancement initiatives for business, operational, or technical improvements that require in-depth evaluation of multiple factors, including intangible or unprecedented factors
Make decisions in complex and multi-faceted data engineering situations requiring understanding of software package options, programming languages, and compliance requirements, influencing and leading Technology to meet deliverables and drive organizational change
Strategically collaborate and consult with internal partners to resolve high-risk data engineering challenges

Required Qualifications:
5+ years of database engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
Expertise in Informatica PowerCenter 10.2, Oracle, Unix, and Autosys

Job Expectations:
Able to work individually and alongside the US counterpart
Able to handle production issues at the data level
Able to create new jobs and schedule them within the time limit
Follow the Agile JIRA process and adhere to JIRA standards
Handle CRs effectively and get the job to production
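The "tooling to discover problems in data" responsibility above can be sketched in miniature. This is a hypothetical illustration in plain Python; the record shape and field names are invented for the example, and a real pipeline would run checks like these inside scheduled Informatica/Autosys jobs.

```python
# Hypothetical sketch of data-discovery tooling: scan a batch of records and
# report basic quality problems (nulls, duplicate keys). Field names are
# illustrative only, not from any real system.

def profile_records(records, key_field):
    """Return a dict of simple data-quality findings for a list of dicts."""
    findings = {"row_count": len(records), "null_counts": {}, "duplicate_keys": []}
    seen = set()
    for rec in records:
        for field, value in rec.items():
            if value is None or value == "":
                findings["null_counts"][field] = findings["null_counts"].get(field, 0) + 1
        key = rec.get(key_field)
        if key in seen:
            findings["duplicate_keys"].append(key)
        seen.add(key)
    return findings

batch = [
    {"acct_id": "A1", "balance": 100.0},
    {"acct_id": "A2", "balance": None},
    {"acct_id": "A1", "balance": 250.0},  # duplicate key
]
print(profile_records(batch, key_field="acct_id"))
```

A report like this would be forwarded to engineering and product leadership rather than acted on automatically, matching the role description above.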
Posted 1 day ago
4.0 - 8.0 years
6 - 10 Lacs
Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10

Position summary: Our proprietary software-as-a-service helps automotive dealerships and sales teams better understand and predict exactly which customers are ready to buy, the reasons why, and the key offers and incentives most likely to close the sale. Its micro-marketing engine then delivers the right message at the right time to those customers, ensuring higher conversion rates and a stronger ROI.

What You'll Do: You will be part of our Data Platform & Product Insights data engineering team. As part of this agile team, you will work in our cloud-native environment to:
Build and support data ingestion and processing pipelines in the cloud. This entails extraction, load, and transformation of big data from a wide variety of sources, both batch and streaming, using the latest data frameworks and technologies
Partner with the product team to assemble large, complex data sets that meet functional and non-functional business requirements; ensure the build-out of data dictionaries/data catalogues and detailed documentation and knowledge around these data assets, metrics, and KPIs
Warehouse this data; build data marts, data aggregations, metrics, KPIs, and business logic that lead to actionable insights into our product efficacy, marketing platform, customer behaviour, retention, etc.
Build real-time monitoring dashboards and alerting systems
Coach and mentor other team members

Who you are:
4 to 8 years of experience in Big Data and Data Engineering
Strong knowledge of advanced SQL, data warehousing concepts, and data mart design
Strong programming skills in SQL and Python/PySpark
Experience in design and development of data pipelines and ETL/ELT processes, on-premises and in the cloud
Experience with one of the cloud providers: GCP, Azure, AWS
Experience with relational SQL and NoSQL databases, including Postgres and MongoDB
Experience with workflow management tools: Airflow, AWS Data Pipeline, Google Cloud Composer, etc.
Experience with distributed version control environments such as Git and Azure DevOps
Experience building Docker images and fetching, promoting, and deploying them to production
Experience integrating a Docker container orchestration framework using Kubernetes, creating pods, ConfigMaps, and deployments using Terraform
Able to convert business queries into technical documentation
Strong problem-solving and communication skills
Bachelor's or an advanced degree in Computer Science or a related engineering discipline
Good to have: exposure to Business Intelligence (BI) tools such as Tableau, Dundas, Power BI, etc.; Agile software development methodologies; working in multi-functional, multi-location teams

Grade: 09/10
Location: Gurugram
Hybrid model: twice a week work from office
Shift time: 12 pm to 9 pm IST

What You'll Love (do ask us about these!):
Total Rewards: monetary, beneficial, and developmental rewards!
Work-Life Balance: you can't do a good job if your job is all you do!
Prepare for the Future: Academy; we are all learners, we are all teachers!
Employee Assistance Program: confidential and professional counselling and consulting
Diversity & Inclusion: HeForShe!
Internal Mobility: grow with us!

About automotiveMastermind

Who we are: Founded in 2012, automotiveMastermind is a leading provider of predictive analytics and marketing automation solutions for the automotive industry and believes that technology can transform data, revealing key customer insights to accurately predict automotive sales. Through its proprietary automated sales and marketing platform, Mastermind, the company empowers dealers to close more deals by predicting future buyers and consistently marketing to them. automotiveMastermind is headquartered in New York City. For more information, visit automotivemastermind.com. At automotiveMastermind, we thrive on high energy at high speed. We're an organization in hyper-growth mode and have a fast-paced culture to match.
Our highly engaged teams feel passionately about both our product and our people. This passion is what continues to motivate and challenge our teams to be best-in-class. Our cultural values of Drive and Help have been at the core of what we do and how we have built our culture through the years. This cultural framework inspires a passion for success while collaborating to win.

What we do: Through our proprietary automated sales and marketing platform, Mastermind, we empower dealers to close more deals by predicting future buyers and consistently marketing to them. In short, we help automotive dealerships generate success in their loyalty, service, and conquest portfolios through a combination of turnkey predictive analytics, proactive marketing, and dedicated consultative services.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people.
That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: health care coverage designed for the mind and body.
Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
-----------------------------------------------------------
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
SWP Priority Ratings - (Strategic Workforce Planning)
Posted 1 day ago
4.0 - 8.0 years
6 - 12 Lacs
Bengaluru
Work from Office
We are seeking an experienced Data Engineer to join our dynamic product development team. In this role, you will be responsible for designing, building, and optimizing data pipelines that ensure efficient data processing and insightful analytics. You will work collaboratively with cross-functional teams, including data scientists, software developers, and product managers, to transform raw data into actionable insights while adhering to best practices in data architecture, security, and scalability.

Role & responsibilities
* Design, build, and maintain scalable ETL processes to ingest, process, and store large datasets.
* Collaborate with cross-functional teams to integrate data from various sources, ensuring data consistency and quality.
* Leverage Microsoft Azure services for data storage, processing, and analytics, integrating with our CI/CD pipeline on Azure Repos.
* Continuously optimize data workflows for performance and scalability, identifying bottlenecks and implementing improvements.
* Deploy and monitor ML/GenAI models in production environments.
* Develop and enforce data quality standards and data validation checks, and ensure compliance with security and privacy policies.
* Work closely with backend developers (PHP/Node/Python) and DevOps teams to support seamless data operations and deployment.
* Stay current with industry trends and emerging technologies to continually enhance data strategies and methodologies.

Required Skills & Qualifications
* Minimum of 4+ years in data engineering or a related field.
* In-depth understanding of streaming technologies such as Kafka and Spark Streaming.
* Strong proficiency in SQL, Python, and Spark SQL for data manipulation, data processing, and automation.
* Solid understanding of ETL/ELT frameworks, data pipeline design, data modelling, data warehousing, and data governance principles.
* In-depth knowledge of performance tuning and optimizing data processing jobs, and debugging time-consuming jobs.
* Proficient in Azure technologies such as ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Fabric, Delta tables, and Unity Catalog.
* Deep understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).
* Good knowledge of Agile and SDLC/CI/CD practices and tools, with a good understanding of distributed systems.
* Proven ability to work effectively in agile/scrum teams, collaborating across disciplines.
* Excellent analytical, troubleshooting, and problem-solving skills, and attention to detail.

Preferred candidate profile
* Experience with NoSQL databases and big data processing frameworks, e.g., Apache Spark.
* Knowledge of data visualization and reporting tools.
* Strong understanding of data security, governance, and compliance best practices.
* Effective communication skills with an ability to translate technical concepts to non-technical stakeholders.
* Knowledge of AIOps and LLM data pipelines.

Why Join GenXAI?
* Innovative Environment: work on transformative projects in a forward-thinking, collaborative setting.
* Career Growth: opportunities for professional development and advancement within a rapidly growing company.
* Cutting-Edge Tools: gain hands-on experience with industry-leading technologies and cloud platforms.
* Collaborative Culture: join a diverse team where your expertise is valued and your ideas make an impact.
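The streaming requirement above (Kafka, Spark Streaming) ultimately comes down to windowed aggregation over event streams. A minimal sketch of a tumbling window in plain Python, with an invented event shape; in production, Kafka and Spark Streaming would handle partitioning, state, and late data for you:

```python
from collections import defaultdict

# Illustrative-only tumbling-window aggregation: group (timestamp, key) events
# into fixed-size windows and count occurrences per key within each window.

def tumbling_window_counts(events, window_seconds):
    """Map each event to its window start, then count per (window, key)."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (3, "click"), (61, "click"), (62, "view")]
# Window 0-59 holds two clicks; window 60-119 holds one click and one view.
print(tumbling_window_counts(events, 60))
```

The same grouping logic underlies the "real-time monitoring dashboards and alerting systems" responsibility: alerts fire when a window's count crosses a threshold.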
Posted 1 day ago
7.0 - 12.0 years
6 - 10 Lacs
Noida, Bengaluru
Work from Office
About the Role: Grade Level (for internal use): 10

Responsibilities:
Work closely with various stakeholders to collect, clean, model, and visualise datasets
Create data-driven insights by researching, designing, and implementing ML models to deliver insights and implement action-oriented solutions to complex business problems
Drive ground-breaking ML technology within the Modelling and Data Science team
Extract hidden value insights and enrich the accuracy of the datasets
Leverage technology and automate workflows, creating modernized operational processes aligned with the team strategy
Understand, implement, manage, and maintain analytical solutions and techniques independently
Collaborate and coordinate with data, content, and modelling teams and provide analytical assistance for various commodity datasets
Drive and maintain high-quality processes and deliver projects in collaborative Agile team environments

Requirements:
7+ years of programming experience, particularly in Python
4+ years of experience working with SQL or NoSQL databases
1+ years of experience working with PySpark
University degree in Computer Science, Engineering, Mathematics, or related disciplines
Strong understanding of big data technologies such as Hadoop, Spark, or Kafka
Demonstrated ability to design and implement end-to-end scalable and performant data pipelines
Experience with workflow management platforms like Airflow
Strong analytical and problem-solving skills
Ability to collaborate and communicate effectively with both technical and non-technical stakeholders
Experience building solutions and working in an Agile environment
Experience working with Git or other source control tools
Strong understanding of Object-Oriented Programming (OOP) principles and design patterns
Knowledge of clean code practices and the ability to write well-documented, modular, and reusable code
Strong focus on performance optimization and writing efficient, scalable code
Nice to have:
Experience working with oil, gas, and energy markets
Experience working with BI visualization applications (e.g., Tableau, Power BI)
Understanding of cloud-based services, preferably AWS
Experience working with unified analytics platforms like Databricks
Experience with deep learning and related toolkits: TensorFlow, PyTorch, Keras, etc.

About S&P Global Commodity Insights: At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.
IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Location - Bengaluru, Noida, Uttar Pradesh, Hyderabad
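The "end-to-end scalable and performant data pipelines" requirement in this listing is, at its core, about composable extract-transform-load stages. Below is a hedged sketch using plain Python generators; the commodity records and field names are invented for illustration, and a real pipeline would swap these stages for PySpark or Airflow-orchestrated tasks:

```python
# Minimal, hypothetical ETL pipeline built from generators. Each stage consumes
# the previous one lazily, so the pattern scales to large inputs without
# materializing intermediate lists.

def extract(rows):
    """Yield raw records one at a time (stands in for a database/file source)."""
    for row in rows:
        yield row

def transform(records):
    """Normalise commodity names and drop records without a price."""
    for rec in records:
        if rec.get("price") is None:
            continue
        yield {"commodity": rec["commodity"].strip().lower(), "price": float(rec["price"])}

def load(records):
    """Collect the cleaned records (stands in for a warehouse write)."""
    return list(records)

raw = [{"commodity": " Brent ", "price": "82.4"}, {"commodity": "WTI", "price": None}]
cleaned = load(transform(extract(raw)))
print(cleaned)  # [{'commodity': 'brent', 'price': 82.4}]
```

Keeping each stage a small, pure function is also what the listing's clean-code and OOP points reward: every stage can be unit-tested in isolation.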
Posted 1 day ago
5.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Must have a minimum of 5 years of experience developing data pipelines
Must have worked on data engineering using Python for at least 5 years
Must have experience working with big data technologies for at least 5 years
Must have experience exploring and proposing solutions and coming up with architecture and technical design
Must have experience working with Docker
Must have experience working with AWS
Must have working experience with DevOps pipelines
Must be an excellent technical leader who can take responsibility for a team and own it
Must have experience practicing Agile methods: Scrum and Kanban
Must have experience interacting with US customers on a day-to-day basis
Posted 1 day ago
4.0 - 8.0 years
7 - 11 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Role overview: As a Sr Product Ops Analyst, you will be responsible for creating measurements, driving enhancements, and generating insights by identifying areas of improvement in products through exploratory data analysis, monitoring and tracking products, and collaborating with Product Owners to identify and fix product issues. This role will be a key component in the execution of the Product team's goals and objectives. As a Sr Product Ops Analyst, you will build critical reports for the product, perform data analysis to identify new opportunities, collaborate with Data Engineering and Data Science teams on setting up new processes/jobs, activate new models, and own and maintain datasets and dashboards for the product. You will build and monitor the required systems that will be used by both the Product team and stakeholders to keep track of key product and business metrics.
You will also need to analyze key trends to identify potential product issues and improvements to the product. It is critical that you are comfortable with a test-and-learn mindset to achieve this. You will encourage the open exchange of information and viewpoints, as well as inspire others to achieve challenging goals and high standards of performance while committing to the organization's direction. You will foster a sense of urgency to achieve goals and leverage resources to overcome unexpected obstacles. Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs.

Primary function: Responsible for driving enhancements and insight generation by identifying areas of improvement in products through exploratory data analysis, monitoring and tracking products, and collaborating with Product Owners to identify and fix product issues.

Principal duties & responsibilities:
Build critical reports specific to existing data to monitor critical metrics
Perform data analysis to identify new opportunities to improve overall metrics
Own and drive complicated problems, breaking them down into smaller tasks, acting as a team lead, and mentoring team members from a technical standpoint
Collaborate with Data Engineering and Data Science teams on setting up new processes/jobs, owning and maintaining the datasets and dashboards
Support the creation of forecasts, design inventory optimization strategies, and leverage technology to ensure our inventory is available at the right time
Partner with cross-functional teams such as IPC product teams, Inventory Management, Transportation, and S&OP to execute key production levers and support Middle Mile positioning to manage the fulfillment network
Help create and leverage projected future product movement plans that aid in understanding the impact of changes to key inputs like forecast, presentation, and lead time
Build and monitor the system that inventory managers will use to get recommended order quantities, implement automatic ordering, decide how much inventory to buy, when to bring inventory into the Target network, and where to position it
Provide solutions to help resolve inventory management and supply chain operations frictions. It will be critical that you are comfortable with ambiguity and utilize a test-and-learn mindset with other core GSCL teams to inform and develop our broader positioning roadmap
Analyze underlying trends to identify potential product issues and improvements to existing product features
Support key item/brand launches from fulfillment by participating in preparation activities and ensuring systems do not have any critical defects
Support peak volumes by executing key tasks defined for the peak season

Job requirements:
Four-year degree with 4-8 years of experience in math, statistics, business, operations research, or supply chain, and/or equivalent experience in an operations research, statistics, or supply chain analysis/operations business environment
Experience with inventory planning concepts (forecasting, planning, optimization, and logistics) gained through work experience
Ability to create/change/execute SQL queries on analytic datasets and analyze results
Experience with and passion for using data to explain or solve complex business problems and influence the invention of new systematic and operational processes
Ability to communicate findings clearly and concisely
Experience utilizing problem-solving and analytical skills
Ability to adjust priorities to the ever-changing needs of the business
End-to-end understanding of digital fulfillment processes

Desired requirements:
R and Python
Experience in retail (stores or headquarters)
Experience as an analyst or manager in Inventory Management, Inventory Management Operations, Planning, or similar
Proven experience achieving results by leading, partnering, and influencing others
Aptitude and familiarity with the design and use of complex logistics and/or reporting software systems

Useful Links:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
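The "recommended order quantities" responsibility in this listing rests on standard inventory math. The sketch below is a textbook illustration with invented numbers, not Target's actual model: a reorder point from average daily demand, lead time, and safety stock, and the order quantity needed to reach it.

```python
# Hedged, textbook-style inventory calculations; all parameters are invented.

def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Inventory level at which a new order should be placed:
    expected demand over the lead time plus a safety buffer."""
    return daily_demand * lead_time_days + safety_stock

def order_quantity(reorder_pt, on_hand, on_order):
    """How much to order so projected inventory reaches the reorder point."""
    return max(0, reorder_pt - on_hand - on_order)

rp = reorder_point(daily_demand=20, lead_time_days=5, safety_stock=30)
print(rp)                                            # 130
print(order_quantity(rp, on_hand=90, on_order=15))   # 25
```

Note how a change to an input like lead time propagates straight through to the recommendation, which is exactly the kind of sensitivity the "projected future product movement plans" duty asks the analyst to understand.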
Posted 1 day ago
7.0 - 12.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data solutions on cloud primarily using Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions. 
Primary Responsibilities:
Ingest data from multiple on-prem and cloud data sources using various tools and capabilities in Azure
Design and develop Azure Databricks processes using PySpark/Spark-SQL
Design and develop orchestration jobs using ADF and Databricks Workflows
Analyze data engineering processes being developed, act as an SME to troubleshoot performance issues, and suggest solutions to improve them
Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, etc.
Build a test framework for Databricks notebook jobs for automated testing before code deployment
Design and build POCs to validate new ideas, tools, and architectures in Azure
Continuously explore new Azure services and capabilities; assess their applicability to business needs
Create detailed documentation for cloud processes, architecture, and implementation patterns
Work with the data & analytics team to build and deploy efficient data engineering processes and jobs on the Azure cloud
Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
Contribute to full-lifecycle project implementations, from design and development to deployment and monitoring
Ensure solutions adhere to security, compliance, and governance standards
Monitor and optimize data pipelines and cloud resources for cost and performance efficiency
Identify solutions to non-standard requests and problems
Support and maintain the self-service BI warehouse
Mentor and support existing on-prem developers in the cloud environment
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Undergraduate degree or equivalent experience
7+ years of overall experience in data & analytics engineering
5+ years of experience working with Azure, Databricks, ADF, and Data Lake
5+ years of experience working with a data platform or product using PySpark and Spark-SQL
Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
In-depth understanding of Azure architecture and the ability to come up with efficient designs and solutions
Highly proficient in Python and SQL
Proven excellent communication skills

Preferred Qualifications:
Snowflake and Airflow experience
Power BI development experience
Experience or knowledge of health care concepts: E&I, M&R, C&S LOBs, claims, members, providers, payers, underwriting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
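The "test framework for Databricks notebook jobs" responsibility above usually means factoring notebook transformation logic into plain functions so it can be asserted on locally, before any cluster deployment. A hypothetical illustration; the column names and cleaning rules are invented, not from any Optum system:

```python
# Keep notebook transform logic in plain functions so a CI/CD job (Jenkins,
# GitHub Actions) can run these checks without a Databricks cluster.

def clean_claim(row):
    """Normalise one claim record: uppercase member id, non-negative amount."""
    return {
        "member_id": row["member_id"].strip().upper(),
        "amount": max(0.0, float(row["amount"])),
    }

def run_checks():
    """Pre-deployment checks that would gate the deployment pipeline."""
    out = clean_claim({"member_id": " ab12 ", "amount": "-5"})
    assert out == {"member_id": "AB12", "amount": 0.0}
    return "all checks passed"

print(run_checks())
```

In the real setup, the same functions would be imported both by the Databricks notebook (applied row-wise or via a PySpark UDF) and by the automated test job, so the deployed code path is exactly what was tested.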
Posted 1 day ago
3.0 - 6.0 years
40 - 45 Lacs
Kochi, Kolkata, Bhubaneswar
Work from Office
We are seeking experienced Data Engineers with over 3 years of experience to join our team at Intuit, through Cognizant. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore. Key Responsibilities: Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark. Work on Data Warehouse and Data Lake solutions to manage structured and unstructured data. Develop and optimize complex SQL queries for data extraction and reporting. Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics. Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs. Monitor data pipelines and troubleshoot any issues related to data integrity or system performance. Required Skills: 3 years of experience in data engineering or related fields. In-depth knowledge of Data Warehouses and Data Lakes. Proven experience in building data pipelines using PySpark. Strong expertise in SQL for data manipulation and extraction. Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, Redshift, and other cloud computing platforms. Preferred Skills: Python programming experience is a plus. Experience working in Agile environments with tools like JIRA and GitHub.
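As a rough, illustrative sketch of the extraction-and-reporting SQL this role describes, the snippet below uses Python's built-in sqlite3 as a stand-in for a warehouse such as Redshift or Athena; the `orders` table and its columns are invented for the example:

```python
# Illustrative only: a tiny stand-in for the "complex SQL queries for data
# extraction and reporting" the role mentions, using Python's built-in
# sqlite3 instead of Redshift/Athena. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("south", 120.0), ("south", 80.0), ("north", 50.0)],
)

# Aggregate revenue per region, highest first -- the shape of a typical
# reporting query, not a production schema.
report = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()
```

In practice the same GROUP BY/ORDER BY shape would run against Redshift or Athena over S3-resident data rather than an in-memory database.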
Posted 1 day ago
5.0 - 10.0 years
20 - 30 Lacs
Pune
Work from Office
Role & responsibilities Update from Client - Ideally, we are looking for a 60:40 mix, with stronger capabilities on the Data Engineering side, along with working knowledge of Machine Learning and Data Science concepts, especially those who can pick up tasks in Agentic AI, OpenAI, and related areas as required in the future.
Posted 1 day ago
5.0 - 10.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Stellantis is seeking a passionate, innovative, results-oriented Information Communication Technology (ICT) Manufacturing AWS Cloud Architect to join the team. As a Cloud Architect, the selected candidate will leverage business analysis, data management, and data engineering skills to develop sustainable data tools supporting Stellantis's Manufacturing Portfolio Planning. This role will collaborate closely with data analysts and business intelligence developers within the Product Development IT Data Insights team. Job responsibilities include but are not limited to: Having deep expertise in the design, creation, management, and business use of large datasets, across a variety of data platforms. Assembling large, complex sets of data that meet non-functional and functional business requirements. Identifying, designing and implementing internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Building required infrastructure for optimal extraction, transformation and loading of data from various data sources using AWS, cloud, and SQL technologies. Working with stakeholders to support their data infrastructure needs while assisting with data-related technical issues. Maintaining high-quality ontology and metadata of data systems. Establishing a strong relationship with the central BI/data engineering COE to ensure alignment in terms of leveraging corporate standard technologies, processes, and reusable data models. Ensuring data security and developing traceable procedures for user access to data systems. Qualifications, Experience and Competency: Education: Bachelor's or Master's degree in Computer Science, or related IT-focused degree. Experience (Essential): Overall 10-15 years of IT experience. Develop, automate and maintain the build of AWS components and operating systems. 
Work with application and architecture teams to conduct proof of concept (POC) and implement the design in a production environment in AWS. Migrate and transform existing workloads from on-premise to AWS. Minimum 5 years of experience in the area of data engineering or data architecture: concepts, approaches, data lakes, data extraction, data transformation. Proficient in ETL optimization, designing, coding, and tuning big data processes using Apache Spark or similar technologies. Experience operating very large data warehouses or data lakes. Investigate and develop new microservices and features using the latest technology stacks from AWS. Self-starter with the desire and ability to quickly learn new technologies. Strong interpersonal skills with ability to communicate & build relationships at all levels. Hands-on experience with AWS cloud technologies like S3, AWS Glue, Glue Catalog, Athena, AWS Lambda, AWS DMS, PySpark, and Snowflake. Experience with building data pipelines and applications to stream and process large datasets at low latencies. Identifying, designing, and implementing internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Desirable: Familiarity with data analytics, engineering processes and technologies. Ability to work successfully within a global and cross-functional team. A passion for technology. We are looking for someone who is keen to leverage their existing skills while trying new approaches and share that knowledge with others to help grow the data and analytics teams at Stellantis to their full potential! 
Specific Skill Requirement: AWS services (Glue, DMS, EC2, RDS, S3, VPCs and all core services, Lambda, API Gateway, CloudFormation, CloudWatch, Route53, Athena, IAM) and SQL, Qlik Sense, Python/Spark, ETL optimization. If you are interested, please share the below details and your updated resume: First Name, Last Name, Date of Birth, Passport No. and Expiry Date, Alternate Contact Number, Total Experience, Relevant Experience, Current CTC, Expected CTC, Current Location, Preferred Location, Current Organization, Payroll Company, Notice Period, Holding any offer
Posted 1 day ago
7.0 - 10.0 years
20 - 25 Lacs
Pune
Work from Office
Role & Responsibilities Architect, build, and tune Snowflake data warehouses and ELT pipelines (SQL, Streams, Tasks, UDFs, Stored Procedures) to meet complex commercial-analytics workloads. Integrate diverse pharma data sources (Veeva, Salesforce, IQVIA, Symphony, RWD, patient-services feeds) via Fivetran, ADF, or Python-based frameworks, ensuring end-to-end data quality. Establish robust data models (star, snowflake, Data Vault) optimized for sales reporting, market-share analytics, and AI/ML use-cases. Drive governance & compliance (HIPAA, GDPR, GxP) through fine-grained access controls, masking, lineage, and metadata management. Lead code reviews, mentor engineers, and resolve performance bottlenecks while right-sizing compute for cost efficiency. Partner with business stakeholders to translate commercial objectives into scalable data solutions and actionable insights. Skills & Qualifications Must-Have 7+ yrs data-engineering / warehousing experience, incl. 4+ yrs hands-on Snowflake design & development. Expert-level SQL plus strong data-modeling (Dimensional, Data Vault) and ETL/ELT optimisation skills. Proficiency in Python (or similar) for automation, API integrations, and orchestration. Proven governance/security acumen within regulated industries (HIPAA, GDPR, PII). Bachelor's in Computer Science, Engineering, or Information Systems (Master's preferred). Strong client-facing communication and problem-solving ability in fast-paced, agile environments. Preferred Direct experience with pharma commercial datasets (sales, CRM, claims, MDM, adherence KPIs). Cloud-platform depth (AWS, Azure, or GCP) and familiarity with tools such as Matillion/DBT/Airflow, Git. Snowflake certifications (SnowPro Core / Advanced) plus Tableau, Power BI, or Qlik connectivity know-how.
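As a toy illustration of the star-schema modeling and market-share analytics mentioned above (not the actual client schema), here is a minimal fact/dimension join; Python's sqlite3 stands in for Snowflake and all table and column names are invented:

```python
# Illustrative only: a miniature star schema (one dimension, one fact table)
# and a market-share query. sqlite3 stands in for Snowflake; names invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, brand TEXT);
CREATE TABLE fact_sales (product_id INTEGER, units INTEGER);
INSERT INTO dim_product VALUES (1, 'BrandA'), (2, 'BrandB');
INSERT INTO fact_sales VALUES (1, 10), (1, 5), (2, 3);
""")

# Share of total units per brand -- the shape of a market-share metric.
market_share = conn.execute("""
    SELECT d.brand,
           SUM(f.units) * 1.0 / (SELECT SUM(units) FROM fact_sales) AS share
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.brand
    ORDER BY share DESC
""").fetchall()
```

The same dimensional shape scales to many dimensions (time, geography, payer) in a real Snowflake model.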
Posted 1 day ago
12.0 - 16.0 years
14 - 20 Lacs
Pune
Work from Office
AI/ML/GenAI AWS SME Job Description Role Overview: An AWS SME with a Data Science background is responsible for leveraging Amazon Web Services (AWS) to design, implement, and manage data-driven solutions. This role involves a combination of cloud computing expertise and data science skills to optimize and innovate business processes. Key Responsibilities: Data Analysis and Modeling: Analyzing large datasets to derive actionable insights and building predictive models using AWS services like SageMaker, Bedrock, Textract, etc. Cloud Infrastructure Management: Designing, deploying, and maintaining scalable cloud infrastructure on AWS to support data science workflows. Machine Learning Implementation: Developing and deploying machine learning models using AWS ML services. Security and Compliance: Ensuring data security and compliance with industry standards and best practices. Collaboration: Working closely with cross-functional teams, including data engineers, analysts, DevOps and business stakeholders, to deliver data-driven solutions. Performance Optimization: Monitoring and optimizing the performance of data science applications and cloud infrastructure. Documentation and Reporting: Documenting processes, models, and results, and presenting findings to stakeholders. Skills & Qualifications Technical Skills: Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker). Strong programming skills in Python. Experience with AI/ML project life cycle steps. Knowledge of machine learning algorithms and frameworks (e.g., TensorFlow, Scikit-learn). Familiarity with data pipeline tools (e.g., AWS Glue, Apache Airflow). Excellent communication and collaboration abilities.
Posted 1 day ago
14.0 - 19.0 years
16 - 20 Lacs
Pune
Work from Office
Company Overview With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose, people, then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you. UKG is looking to hire a Software Architect to lead in designing and delivering next-generation and transformational architectural designs that yield superior functional, scalable, resilient, adaptable, performant, and secure offerings and component services at UKG. They will establish the starting point for transformational architectural decisions, patterns, and practices, with revolutionary architectural outcomes of performance, scale, security, and delivery that will shape how new software architecture patterns, technologies, and best practices are adopted and measured at UKG. As a Software Architect at UKG you will: Manage and deliver new architectural transformational designs, identify architecture risks, and maintain architectural opportunities and risk assessment for all stakeholders. 
Serve as subject matter expert for the Software Architecture practice, with the ability to provide technical leadership in areas of Software Design, Development & Delivery, Reliability, Scalability, Performance, and Security across many software domains such as, but not limited to, UI, APIs, Microservices, DDD, Platform Services, Data Engineering and Public Cloud. Contribute to the technical leadership for the Product Architecture group to help other software architects envision, develop, and foster the adoption of new architectural transformational designs and implementations. Serve as Technical Ambassadors of goodwill for our internal Technical Community as well as the external Tech Industry and Academia communities. Partner with Product Owners and Engineering Owners when making roadmap, design, architectural, and engineering impacting decisions. Lead initiatives to effectively communicate and present the architectural decisions and technical strategies so that development teams properly understand why the strategies need to be adopted. Lead initiatives in development of architecturally significant proof-of-concept solutions to assist product architects and development teams in accelerating the adoption of the technical strategy. Lead technical due diligence activities and third-party partnership evaluation initiatives. Serve as technical strategic advisors to research work being executed in the Development organization. 14+ years of Software Development experience and 5+ years of Software Architecture experience, as well as 5+ years of technical leadership and architecture experience in software and cloud development (ideally in SaaS). 5+ years' experience designing and delivering large-scale distributed systems in a multi-tenant SaaS environment. 5+ years' experience building, managing, and leading architects and technical leads. Expert understanding of security, reliability, scalability, high availability, and concurrency architectural patterns and solutions. 
Expert in solution design across the full technology stack, including for public and hybrid cloud deployments. Expert in patterns and solutions that enable evolutionary architectures, leveraging flexibility and creativity when balancing present technologies with emerging ones when formulating new strategies. Influential speaker and an expert in designing and delivering presentations on large stages. Prior experience with at least one major IaaS and/or PaaS technology (OpenStack, AWS, GCP, Azure, Kubernetes, Cloud Foundry, etc.). Prior experience with agile development, Continuous Delivery, DevOps, and SRE practices. Proficient in at least one static OO language (Java, Scala, C#). Proficient in at least one dynamic language (JavaScript/TypeScript, Python, Node.js). Proficient in current development tools (GitHub, GitLab, CLI, Vim, JetBrains, Xamarin, Visual Studio, Concourse.ci, CircleCI, Jenkins). Qualifications: Bachelor's or Master's degree in Computer Science, Mathematics, or Engineering is preferred. Prior experience technically leading at least one vertical software design practice in depth, such as Microservices Architecture, Public Cloud Architecture, Site Reliability Architecture, Data Engineering Architecture, or Software Security. Prior experience with relational and non-relational database technologies (MySQL, MongoDB, Cassandra). Prior experience with messaging and event streaming solutions (Kafka, RabbitMQ, Kafka Streams, Spark). Prior experience with Workflow (Camunda, Activiti) and iPaaS solutions (MuleSoft, Dell Boomi) is a bonus. Strong understanding of infrastructure and related technologies (compute, storage, networking). Where we're going: UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. 
Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio designed to support customers of all sizes, industries, and geographies that will propel us into an even brighter tomorrow! For assistance in the Application and Interview Process, contact UKGCareers@ukg.com
Posted 1 day ago
12.0 - 15.0 years
16 - 20 Lacs
Bengaluru
Work from Office
As a Principal Technical Specialist, you will lead teams in technical discussions, architecture strategy, and system design for high-performance applications. You will collaborate with architects and developers to define workflows, interfaces, and domain models, leveraging SQL and NoSQL databases. Your role requires a hands-on approach, strong communication skills, and the ability to translate complex technical concepts into actionable insights. You have: Bachelor's degree in Engineering with 12 to 15 years of relevant work experience. Experience with architectural patterns and programming languages such as Java, Python/Shell, and Golang. Familiarity with frameworks like Spring, Guice, or Micronaut, and libraries such as Pandas, Keras, PyTorch, SciPy, and NumPy. Experience with Kafka, Spark, and databases (SQL: Postgres, Oracle; NoSQL: Elastic, Prometheus, Mongo, Cassandra, Redis, and Pinot). It would be nice if you also had: Experience in OLTP/real-time system management in enterprise software. Experience in both large-scale and small-scale development, with the ability to model domain-specific systems. Expertise in data engineering, statistical analytics, and AI/ML techniques (AI experience is a plus). Knowledge of NMS/EMS for the telecom domain, including Network Operational Lifecycle and Network Planning. Lead and guide the team in technical discussions. Expertise in Java, Python/Shell, and frameworks like Spring, Kafka, and AI/ML libraries will drive innovation in telecom network management and real-time enterprise software. Analyze and decompose requirements. Work on long lead items and define architecture strategy for the application. Work with architects and developers in other applications and define interfaces and workflows, drawing on a strong foundation in statistical analytics, data engineering, and AI/ML. Communicate and present to internal and external audiences.
Posted 1 day ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
The AI/Data Engineer will be responsible for designing, implementing, and maintaining scalable data solutions. This role will involve working with various data tools and technologies to ensure efficient data processing, integration, and visualization. Key Responsibilities: Develop and maintain data pipelines and ETL processes to ingest and transform data from multiple sources. Design, implement, and manage data models and databases using SQL. Utilize Python for data manipulation, analysis, and automation tasks. Administer and automate processes on Linux systems using shell scripting and tools like PuTTY. Schedule and monitor jobs using Control-M or similar scheduling tools. Create interactive dashboards and reports using Tableau and Power BI to support data-driven decision-making. Collaborate with data scientists and analysts to support AI/ML model deployment and integration. Ensure data quality, integrity, and security across all data processes. Utilize version control software, such as Git and Bitbucket, to manage and track code changes effectively. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. At least 3 years of proven experience as a Data Engineer, AI Engineer, or similar role. Proficiency in Python and SQL.
Posted 1 day ago
3.0 - 5.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Project description We are more than a diversified global financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering the services our customers expect from us. With extensive experience, deep knowledge and worldwide presence across financial markets, we enable businesses and economies around the world to fund innovation, manage risk and create jobs. It's how we've contributed to supporting the financial stability and growth of communities and economies globally for more than 300 years. Through a comprehensive suite of trusted financial market infrastructure services - and our open-access model - we provide the flexibility, stability and trust that enable our customers to pursue their ambitions with confidence and clarity. We are headquartered in the United Kingdom, with significant operations in 70 countries across EMEA, North America, Latin America and Asia Pacific. We employ 25,000 people globally, more than half located in Asia Pacific. Responsibilities: As a Senior Quality Assurance Engineer, you will be responsible for ensuring the quality and reliability of complex data-driven systems, with a focus on financial services applications. You will work closely with Data Engineers, Business Analysts, and Developers across global teams to validate the functionality, accuracy, and performance of software solutions, particularly around data migration from on-premises to cloud platforms. 
Key responsibilities include: Leading and executing end-to-end test plans, including functional, unit, regression, and back-to-back testing. Designing test strategies for data migration projects, with a strong focus on Oracle-to-Cloud transitions. Verifying data accuracy and transformation logic across multiple environments. Writing Python-based automated test scripts and utilities for validation. Participating in Agile ceremonies, collaborating closely with cross-functional teams. Proactively identifying and documenting defects, inconsistencies, and process improvements. Contributing to continuous testing and integration practices. Ensuring traceability between requirements, test cases, and delivered code. Skills: Must have (Mandatory Skills Description): The ideal candidate must demonstrate strong experience (minimum 7 years) and hands-on expertise in the following areas: Data Testing (Oracle to Cloud Migration): Deep understanding of testing strategies related to large-scale data movement and transformation validation between legacy on-premise systems and modern cloud platforms. Python Scripting: Proficient in using Python for writing automated test scripts and tools to streamline testing processes. Regression Testing: Proven ability to develop and manage comprehensive regression test suites ensuring consistent software performance over releases. Back-to-Back Testing: Experience in comparing results between old and new systems or components to validate data integrity post-migration. Functional Testing: Skilled in verifying system behavior against functional requirements in a business-critical environment. Unit Testing: Capable of writing and executing unit tests for small code components to ensure correctness at the foundational level. Nice to have: While not required, the following skills would be a strong plus and would enhance your effectiveness in the role: Advanced Python Development: Experience in building complex QA tools or contributing to CI/CD pipelines using Python. 
DBT (Data Build Tool): Familiarity with DBT for transformation testing and documentation in data engineering workflows. Snowflake: Exposure to the Snowflake cloud data warehouse and understanding of its testing and validation mechanisms. Other: Languages: English: B1 Intermediate. Seniority: Senior
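The back-to-back testing described above, comparing a legacy extract against its migrated counterpart, can be sketched as a small Python utility; the row shape and key field are invented for illustration:

```python
# Hedged sketch of back-to-back testing after a data migration: compare the
# legacy (on-prem) extract with the migrated (cloud) extract keyed by ID and
# report rows that were dropped, added, or changed. Field names are invented.

def back_to_back_diff(legacy_rows, migrated_rows, key="id"):
    """Return (missing, extra, changed) row keys between two extracts."""
    legacy = {r[key]: r for r in legacy_rows}
    migrated = {r[key]: r for r in migrated_rows}
    missing = sorted(set(legacy) - set(migrated))   # dropped during migration
    extra = sorted(set(migrated) - set(legacy))     # unexpectedly introduced
    changed = sorted(
        k for k in set(legacy) & set(migrated) if legacy[k] != migrated[k]
    )
    return missing, extra, changed

legacy = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}]
migrated = [{"id": 1, "amt": 10}, {"id": 2, "amt": 99}, {"id": 4, "amt": 40}]
missing, extra, changed = back_to_back_diff(legacy, migrated)
```

In a real Oracle-to-Cloud project, the two inputs would come from queries against both environments, and tolerance rules (type coercion, rounding) would be layered on top of the plain equality used here.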
Posted 1 day ago
4.0 - 8.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Project description We are seeking a highly skilled and motivated Data Scientist with 5+ years of experience to join our team. The ideal candidate will bring strong data science, programming, and data engineering expertise, along with hands-on experience in generative AI, large language models, and modern LLM application frameworks. This role also demands excellent communication and stakeholder management skills to collaborate effectively across business units. Skills Must have Experience: 5+ years of industry experience as a Data Scientist, with a proven track record of delivering impactful, data-driven solutions. Programming Skills: Advanced proficiency in Python, with extensive experience writing clean, efficient, and maintainable code. Proficiency with version control tools such as Git. Data Engineering: Strong working proficiency with SQL and distributed computing with Apache Spark. Cloud Platforms: Experience building and deploying apps on Azure Cloud. Generative AI & LLMs: Practical experience with large language models (e.g., OpenAI, Anthropic, HuggingFace). Knowledge of Retrieval-Augmented Generation (RAG) techniques and prompt engineering is expected. Machine Learning & Modeling: Strong grasp of statistical modeling, machine learning algorithms, and tools like scikit-learn, XGBoost, etc. Stakeholder Engagement: Excellent communication skills with a demonstrated ability to interact with business stakeholders, understand their needs, present technical insights clearly, and drive alignment across teams. 
Tools and libraries: Proficiency with libraries like Pandas, NumPy, and ML lifecycle tools such as MLflow. Team Collaboration: Proven experience contributing to agile teams and working cross-functionally in fast-paced environments. Nice to have: Hands-on experience with Databricks and Snowflake. Hands-on experience building LLM-based applications using agentic frameworks like LangChain, LangGraph, and AutoGen. Familiarity with data visualization platforms such as Power BI, Tableau, or Plotly. Front-end/full-stack development experience. Exposure to MLOps practices and model deployment pipelines in production. Other: Languages: English: C2 Proficient. Seniority: Regular
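As a heavily simplified sketch of the Retrieval-Augmented Generation (RAG) flow mentioned above: retrieve the most relevant snippet, then assemble it into a prompt. Real systems use embedding search and an LLM call; the corpus, scoring, and prompt template here are toy assumptions:

```python
# Toy RAG sketch: rank a tiny corpus by keyword overlap with the query, then
# build a prompt from the top hit. Production systems would use embeddings
# plus an LLM API; everything here is invented for illustration.

def retrieve(query, corpus, top_k=1):
    """Rank documents by word overlap with the query, best first."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

corpus = [
    "spark handles distributed computing",
    "git tracks code versions",
]
question = "how does spark do distributed computing"
context = retrieve(question, corpus)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

The `prompt` string is what would be sent to the model; swapping the overlap score for cosine similarity over embeddings gives the standard RAG retrieval step.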
Posted 1 day ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Project description A DevOps Support Engineer will perform tasks related to data pipeline work, along with monitoring and support related to job execution, data movement, and on-call support. In addition, deployed pipeline implementations will be tested for production validation. Responsibilities Provide production support: 1st tier, after-hours, and on-call. The candidate will eventually develop into more data engineering work within the Network Operations team. The selected resource will learn the telecommunications domain while also developing data engineering skills. Skills Must have ETL pipelines, data engineering, data movement/monitoring. Azure Databricks. Watchtower. Automation tools. Testing. Nice to have Data Engineering. Other: Languages: English: C2 Proficient. Seniority: Regular
Posted 1 day ago
7.0 - 12.0 years
11 - 15 Lacs
Gurugram
Work from Office
Project description We are looking for an experienced Data Engineer to contribute to the design, development, and maintenance of our database systems. This role will work closely with our software development and IT teams to ensure the effective implementation and management of database solutions that align with the client's business objectives. Responsibilities The successful candidate will be responsible for managing technology in projects and providing technical guidance/solutions for work completion: (1.) To be responsible for providing technical guidance/solutions (2.) To ensure process compliance in the assigned module and participate in technical discussions/reviews (3.) To prepare and submit status reports for minimizing exposure and risks on the project or closure of escalations (4.) To be self-organized and focused on developing on-time, quality software. Skills Must have At least 7 years of experience in development on data-specific projects. Working knowledge of streaming data and the Kafka framework (KSQL, MirrorMaker, etc.). Strong programming skills in at least one of these programming languages: Groovy/Java. Good knowledge of data structures, ETL design, and storage. Must have worked in streaming data environments and pipelines. Experience working in near real-time/streaming data pipeline development using Apache Spark, StreamSets, Apache NiFi, or similar frameworks. Nice to have N/A. Other: Languages: English: B2 Upper Intermediate. Seniority: Senior
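The near real-time aggregation such pipelines perform can be reduced to its core windowing logic; the plain-Python tumbling-window counter below is only illustrative (a production job would use Spark Structured Streaming or NiFi over Kafka topics), and the event shape is invented:

```python
# Illustrative only: tumbling-window counts over a stream of (timestamp,
# payload) pairs -- the essence of a near-real-time aggregation, without any
# Kafka/Spark machinery. Timestamps are seconds; the payloads are invented.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per fixed, non-overlapping window of window_seconds."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

events = [(5, "a"), (59, "b"), (61, "c"), (130, "d")]
counts = tumbling_window_counts(events, window_seconds=60)
```

In Spark Structured Streaming the equivalent is a `groupBy(window(...))` over a Kafka source; the bucketing arithmetic is the same.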
Posted 1 day ago
1.0 - 4.0 years
6 - 10 Lacs
Pune
Work from Office
To ensure successful initiation, planning, execution, control and completion of the project by guiding team members on technical aspects, conducting reviews of technical documents and artefacts. Lead project development, production support and maintenance activities. Ensure timesheets are filled in and the invoicing process is completed on or before the deadline. Lead the customer interface for the project on an everyday basis, proactively addressing any issues before they are escalated. Create functional and technical specification documents. Track open tickets/incidents in the queue, allocate tickets to resources, and ensure that the tickets are closed within the deadlines. Ensure analysts adhere to SLAs/KPIs/OLAs. Ensure that everyone in the delivery team, including self, is constantly thinking of ways to do things faster, better or in a more economic manner. Lead and ensure the project is in compliance with Software Quality Processes and within timelines. Review functional and technical specification documents. Serve as the single point of contact for the team to the project stakeholders. Promote teamwork; motivate, mentor and develop subordinates. Provide application production support as per the process/RACI (Responsible, Accountable, Consulted and Informed) Matrix.
Posted 1 day ago
6.0 - 11.0 years
6 - 11 Lacs
Hyderabad
Remote
We are seeking a highly skilled and proactive Senior Database Administrator to join our team. This hybrid role blends traditional DBA responsibilities with modern data engineering tasks to support our Comparative data loads and ensure optimal performance of critical database systems. You'll play a key role in scaling our data infrastructure, diagnosing performance bottlenecks, and contributing to architectural improvements. Key Responsibilities: Data Architecture & Scalability: Redesign and improve JSON blob storage strategies that have reached scalability limits. Evaluate and optimize complex data structures spanning five core applications. Performance Tuning & Optimization: Analyze and resolve performance issues across PostgreSQL, MSSQL, and Snowflake environments. Refactor inefficient queries and recommend index strategies or schema redesigns. Support and tune Amazon RDS PostgreSQL deployments for high availability and performance. Data Engineering Support: Contribute to the design and maintenance of ETL pipelines that support Comparative data loads. Collaborate with data engineering teams on pipeline orchestration, transformation, and delivery. Database Administration: Ensure high availability, backups, and disaster recovery across database platforms. Monitor and manage database health, security, and compliance. GIS/Spatial Data (Helpful): Leverage experience with PostGIS and/or SpatiaLite to enhance location-based features where applicable. Technical Environment: Databases: PostgreSQL (Amazon RDS), MSSQL, Snowflake. Data Engineering & Orchestration: Apache Spark, dbt, Airflow, SSIS. Cloud & Infrastructure: AWS (including Infrastructure as Code practices). Preferred Qualifications: 6+ years of experience in database administration and performance tuning. Proficiency with JSON data modeling and optimization. Strong SQL expertise with proven ability to refactor complex queries. Hands-on experience with Amazon RDS, particularly PostgreSQL. 
Familiarity with Apache Spark for distributed data processing. Familiarity with dbt for data modeling and transformations. Familiarity with Apache Airflow. Familiarity with MonetDB. Exposure to AWS Infrastructure as Code (IaC) using tools like CloudFormation or Terraform. (Nice-to-have) Experience working with spatial data formats (PostGIS/SpatiaLite). Strong collaboration skills and ability to work in a cross-functional, agile environment
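The JSON-blob redesign described above typically starts by flattening nested documents into scalar columns that can then be promoted to real, indexable relational columns. A minimal sketch, with invented keys:

```python
# Hedged sketch of the first step of a "JSON blob" refactor: flatten nested
# JSON documents into dotted-path scalar columns, so hot fields can be pulled
# out of the blob into indexed table columns. All keys are invented examples.
import json

def flatten(doc, prefix=""):
    """Flatten a nested dict into dotted-path scalar columns."""
    cols = {}
    for key, value in doc.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            cols.update(flatten(value, prefix=path + "."))
        else:
            cols[path] = value
    return cols

blob = json.loads('{"user": {"id": 7, "plan": "pro"}, "active": true}')
row = flatten(blob)
```

Once the frequently queried paths are known, they become ordinary columns (with indexes), leaving only rarely used fields in the blob, which is one common way out of the scalability limits the posting mentions.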
Posted 1 day ago
6.0 - 11.0 years
8 - 14 Lacs
Pune
Work from Office
Responsibilities: Design, develop, and maintain scalable data pipelines using Databricks, PySpark, Spark SQL, and Delta Live Tables. Collaborate with cross-functional teams to understand data requirements and translate them into efficient data models and pipelines. Implement best practices for data engineering, including data quality and data security. Optimize and troubleshoot complex data workflows to ensure high performance and reliability. Develop and maintain documentation for data engineering processes and solutions. Requirements: Bachelor's or Master's degree. Proven experience as a Data Engineer, with a focus on Databricks, PySpark, Spark SQL, and Delta Live Tables. Strong understanding of data warehousing concepts, ETL processes, and data modelling. Proficiency in programming languages such as Python and SQL. Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services. Excellent problem-solving skills and the ability to work in a fast-paced environment. Strong leadership and communication skills, with the ability to mentor and guide team members.
Posted 1 day ago
8.0 - 13.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Skills: Extensive experience with Google Data Products (Cloud Data Fusion, BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Dataprep, etc.). Expertise in Cloud Data Fusion, BigQuery, and Dataproc. Experience in MDM, Metadata Management, Data Quality, and Data Lineage tools. E2E data engineering and lifecycle management (including non-functional requirements and operations). Experience with SQL and NoSQL modern data stores. E2E solution design skills: prototyping, usability testing, and data visualization literacy. Excellent knowledge of the software development life cycle.
Posted 1 day ago
0.0 - 5.0 years
3 - 8 Lacs
Gurugram
Work from Office
Practice Overview
Practice: Data and Analytics (DNA) - Analytics Consulting
At Oliver Wyman DNA, we partner with clients to solve tough strategic business challenges with the power of analytics, technology, and industry expertise. We drive digital transformation, create customer-focused solutions, and optimize operations for the future. Our goal is to achieve lasting results in collaboration with our clients and stakeholders. We value and offer opportunities for personal and professional growth. Join our entrepreneurial team focused on delivering impact globally.
Our Mission and Purpose
Mission: Leverage India's high-quality talent to provide exceptional analytics-driven management consulting services that empower clients globally to achieve their business goals and drive sustainable growth, by working alongside Oliver Wyman consulting teams.
Purpose: Our purpose is to bring together a diverse team of highest-quality talent, equipped with innovative analytical tools and techniques to deliver insights that drive meaningful impact for our global client base. We strive to build long-lasting partnerships with clients based on trust, mutual respect, and a commitment to deliver results. We aim to build a dynamic and inclusive organization that attracts and retains the top analytics talent in India and provides opportunities for professional growth and development. Our goal is to provide a sustainable work environment while fostering a culture of innovation and continuous learning for our team members.
The Role and Responsibilities
We have open positions ranging from Data Engineer to Lead Data Engineer, providing talented and motivated professionals with excellent career and growth opportunities. We seek individuals with relevant prior experience in quantitatively intense areas to join our team. You'll be working with varied and diverse teams to deliver unique and unprecedented solutions across all industries.
In the data engineering track, you will be primarily responsible for developing and monitoring high-performance applications that can rapidly deploy the latest machine learning frameworks and other advanced analytical techniques at scale. This role requires you to be a proactive learner who quickly picks up new technologies whenever required. Most of the projects require handling big data, so you will work extensively with related technologies. You will work closely with other team members to support project delivery and ensure client satisfaction.
Your responsibilities will include:
Working alongside Oliver Wyman consulting teams and partners, engaging directly with clients to understand their business challenges
Exploring large-scale data and designing, developing, and maintaining data/software pipelines and ETL processes for internal and external stakeholders
Explaining, refining, and developing the necessary architecture to guide stakeholders through the journey of model building
Advocating application of best practices in data engineering, code hygiene, and code reviews
Leading the development of proprietary data engineering assets, ML algorithms, and analytical tools on varied projects
Creating and maintaining documentation to support stakeholders and runbooks for operational excellence
Working with partners and principals to shape proposals that showcase our data engineering and analytics capabilities
Travelling to clients' locations across the globe, when required, understanding their problems, and delivering appropriate solutions in collaboration with them
Keeping up with emerging state-of-the-art data engineering techniques in your domain
Your Attributes, Experience & Qualifications
Bachelor's or master's degree in a computational or quantitative discipline from a top academic program (Computer Science, Informatics, Data Science, or related)
Exposure to building cloud-ready applications
Exposure to test-driven development and integration
Pragmatic and methodical approach to solutions and delivery with a focus on impact
Independent worker with ability to manage workload and meet deadlines in a fast-paced environment
Collaborative team player
Excellent verbal and written communication skills and command of English
Willingness to travel
Respect for confidentiality
Technical Background
Prior experience in designing and deploying large-scale technical solutions
Fluency in modern programming languages (Python is mandatory; R, SAS desired)
Experience with AWS/Azure/Google Cloud, including familiarity with services such as S3, EC2, Lambda, Glue
Strong SQL skills and experience with relational databases such as MySQL, PostgreSQL, or Oracle
Experience with big data tools like Hadoop, Spark, Kafka
Demonstrated knowledge of data structures and algorithms
Familiarity with version control systems like GitHub or Bitbucket
Familiarity with modern storage and computational frameworks
Basic understanding of agile methodologies such as CI/CD, Applicant Resiliency, and Security
Valued but not required:
Compelling side projects or contributions to the Open-Source community
Prior experience with machine learning frameworks (e.g., Scikit-Learn, TensorFlow, Keras/Theano, Torch, Caffe, MXNet)
Familiarity with containerization technologies, such as Docker and Kubernetes
Experience with UI development using frameworks such as Angular, Vue, or React
Experience with NoSQL databases such as MongoDB or Cassandra
Experience presenting at data science conferences and connections within the data science community
Interest/background in Financial Services in particular, as well as other sectors where Oliver Wyman has a strategic presence
Roles and levels: We are hiring for engineering roles across levels, from Data Engineer to Lead Data Engineer, for experience ranging from 0-8 years.
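The ETL and SQL skills listed above can be sketched end to end in a few lines. This is a hypothetical minimal extract-transform-load, using Python's stdlib csv and sqlite3 modules as stand-ins for the larger pipelines and databases the role describes (the source data, table, and column names are invented):

```python
import csv
import io
import sqlite3

# Extract: a CSV source, inlined here instead of a file or API for self-containment.
SOURCE = "region,revenue\nnorth,100\nsouth,250\nnorth,50\n"
rows = list(csv.DictReader(io.StringIO(SOURCE)))

# Transform: cast string fields to typed records.
records = [(r["region"], float(r["revenue"])) for r in rows]

# Load: insert into a relational store, then aggregate with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?)", records)
totals = dict(db.execute("SELECT region, SUM(revenue) FROM sales GROUP BY region"))
print(totals)  # {'north': 150.0, 'south': 250.0}
```

In production the extract step would read from S3 or an API, the load target would be a warehouse, and a scheduler such as Airflow would orchestrate the steps — but the shape of the pipeline is the same.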
Posted 1 day ago
5.0 - 10.0 years
8 - 12 Lacs
Lucknow
Work from Office
Job Overview
Branch launched in India in 2019 and has seen rapid adoption and growth. As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. We are seeking an experienced and strategic Data and Reporting Lead to shape our data strategy and drive data-driven decision-making across the organization. This role will focus on developing a comprehensive data infrastructure, ensuring data accuracy, and providing critical insights to support our business goals.
Responsibilities
Data Strategy & Governance: Develop and implement a data strategy that aligns with organizational goals. Establish governance policies to maintain data quality, consistency, and security.
Team Leadership: Provide training and development to enhance the team's skills in data management and reporting.
Reporting & Analytics: Oversee the creation of dashboards and reports, delivering key insights to stakeholders. Ensure reports are accessible, reliable, and relevant, with a focus on performance metrics, customer insights, and operational efficiencies.
Cross-functional Collaboration: Work closely with cross-functional teams (Tech, Finance, Operations, Marketing, Credit, and Analytics) to identify data requirements, integrate data across systems, and support data-driven initiatives.
Data Infrastructure & Tools: Work with Data Engineering to assess, select, and implement data tools and platforms to optimize data storage, processing, and reporting capabilities. Maintain and improve our data infrastructure to support scalability and data accessibility.
Data Compliance: Ensure adherence to data privacy laws and compliance standards, implementing best practices in data security and privacy.
Qualifications
Experience: 5-10 years of experience in data management and reporting, with at least some in a leadership role.
Education: Bachelor's or Master's degree in Data Science, Business Analytics, Statistics, Computer Science, or a related field (STEM).
Technical Skills: Proficiency in data visualization tools (Metabase, Sisense, Tableau, Power BI), SQL, and data warehousing solutions. Knowledge of ETL processes and familiarity with cloud data platforms is a plus.
Analytical Skills: Strong analytical abilities and a strategic mindset, with proven experience in translating data into actionable business insights.
Leadership & Communication: Excellent leadership, communication, and presentation skills. Ability to communicate complex information clearly to both technical and non-technical stakeholders. Strong strategic thinking and problem-solving skills. Enthusiasm for working across cultures, functions, and time zones.
Posted 1 day ago