
2605 Data Engineering Jobs - Page 26

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

9.0 - 13.0 years

22 - 27 Lacs

Bengaluru

Remote

Who we are
At Cimpress Technology, we are dedicated to crafting cutting-edge, world-class software solutions to power our mass customization businesses, serving a vast customer base of over 17 million individuals worldwide. Our Mass Customization Platform (MCP) is a flexible ecosystem of modular, multi-tenant services that empowers Cimpress businesses to select tailored solutions that meet their unique needs, fostering rapid innovation in product launches, customer outreach, and fulfillment of customized orders. Our business successfully brings print and mass customization into the 21st century. Our products range from T-shirts to business cards - you name it, we decorate it! In total, we are capable of producing over 80 million unique products. Founded as Vistaprint in 1995, we are constantly growing and will continue to do so in the future.

The Cimpress Technology organization is working to make our company one of the world's most well-known and successful data-driven companies. The cross-functional team includes analysts, software and data engineers, architects, data scientists, product owners, and more - all passionate about providing Cimpress Technology with solutions, insights and tools to develop products and services customers love. Cimpress Technology team members are empowered to learn new skills, communicate openly and be active problem-solvers. We are about to disrupt the entire industry. Are you ready to join us and contribute?

We are now looking for a Lead Analytics Engineer to join us on this journey: someone who sits at the intersection of analytics and data engineering, and thrives on transforming complex data into trusted insights through scalable, production-grade pipelines and user-friendly data products.

What You'll Do
As a Lead Analytics Engineer, you will: Partner with cross-functional stakeholders (Product, Marketing, Engineering, Data Science, and Operations) to design, develop, and maintain end-to-end data solutions including data models, curated datasets, dashboards, and experiments. Engineer production-grade, scalable data pipelines using tools like dbt, Airflow, Snowflake, and Python, incorporating best practices in version control, CI/CD, and monitoring. Design data products supporting supply chain, fulfillment, and operational analytics - driving insight into inventory, shipping, order workflows, and manufacturing efficiency. Lead the design of semantic layers and data models that empower analysts and self-service BI tools like Looker or Power BI. Collaborate in an agile product team, participate in backlog grooming, and deliver MVPs iteratively to drive early impact.

What You'll Bring
9+ years of experience in analytics, data engineering, or analytics engineering roles, ideally in a fast-paced eCommerce or tech environment. Required experience delivering data solutions in at least one of the following areas: Supply Chain Analytics; Product Fulfillment, Order Management, or Manufacturing Analytics. Proven experience in: SQL (strong proficiency); at least one programming language: Python (preferred), R, or Scala; data transformation tools like dbt and orchestration tools like Airflow; cloud data platforms (e.g., Snowflake, AWS, Azure, or GCP); building reliable, performant, and maintainable ETL/ELT pipelines. Hands-on experience creating self-service BI dashboards and semantic models using Looker, Tableau, or Power BI. Familiarity with microservice architectures, APIs, and real-time data processing is a plus.
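
To make the stack concrete: a minimal sketch of the dbt-plus-Airflow pattern this posting describes. The DAG name and project path are hypothetical, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A daily pipeline: run dbt transformations, then dbt tests.
with DAG(
    dag_id="analytics_daily",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run dbt models against the warehouse profile (e.g., Snowflake).
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    # Enforce data quality with dbt's built-in tests before downstream use.
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    run_models >> test_models
```

Keeping `dbt test` as a separate downstream task makes quality checks visible in the DAG rather than buried in one script.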
Bachelor's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field; Master's degree preferred. Excellent communication and stakeholder engagement skills: you can translate business problems into data solutions.

Why You'll Love Working Here
This is a unique opportunity to lead and shape the future of impactful software solutions, working alongside a diverse and talented team. You'll be at the forefront of innovation, building systems that matter while mentoring and inspiring others to achieve their best. We strive to give you everything you need to learn, grow, and succeed and take a step forward in your learning journey - and your life. Through constant learning, collaboration, and perpetual exposure to what's next, we're always pushing boundaries and broadening our horizons. At Cimpress, we place great importance on the wellbeing of our employees, which is why we offer perks that ensure an excellent work/life balance.

Led by founder and CEO Robert Keane, Cimpress invests in and helps build customer-focused, entrepreneurial mass customization businesses. Through the personalized physical (and digital) products these companies create, we empower over 17 million global customers to make an impression. Last year, Cimpress generated $2.88B in revenue through customized print products, signage, apparel, packaging and more. The Cimpress family includes a dynamic, international group of businesses and central teams, all working to solve problems, build businesses, innovate and improve.

Equal Opportunity Employer
Cimpress Tech, a Cimpress company, is an Equal Employment Opportunity Employer. All qualified candidates will receive consideration for employment without regard to race, color, sex, national or ethnic origin, nationality, age, religion, citizenship, disability, medical condition, sexual orientation, gender identity, gender presentation, legal or preferred name, marital status, pregnancy, family structure, veteran status or any other basis protected by human rights laws or regulations. This list is not exhaustive, and, in fact, we strive to do more than the law requires.

Please visit: https://cimpress.com/our-platform/
You can learn more about our company through the links below:
Cimpress Vision - https://player.vimeo.com/video/111855876
Our story - http://cimpress.com/about-us/
Global corporate website - www.cimpress.com

Posted 2 weeks ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Mumbai

Work from Office

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation.

An ELK (Elasticsearch, Logstash & Kibana) Data Engineer is responsible for developing, implementing, and maintaining ELK stack-based solutions within an organization. The engineer plays a crucial role in developing efficient and effective log processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes.

In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset - a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
BS or MS degree in Computer Science or a related technical field. 10+ years of overall IT industry experience. 5+ years of Python or Java development experience. 5+ years of SQL experience (NoSQL experience is a plus). 4+ years of experience with schema design and dimensional data modelling. 3+ years of experience with Elasticsearch, Logstash and Kibana. Ability to manage and communicate data warehouse plans to internal clients. Experience designing, building, and maintaining data processing systems.

Preferred Skills and Experience
• Experience working with machine learning models.
• Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes).
• Elastic Certification is preferable.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
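
For flavor, a minimal sketch of the kind of log-analytics query an ELK engineer writes daily, using the official Python client; the endpoint, credentials, index pattern, and field names are placeholder assumptions.

```python
from elasticsearch import Elasticsearch

# Connect to the cluster (URL and API key are placeholders).
es = Elasticsearch("https://localhost:9200", api_key="...")

# Count ERROR-level log lines per service over the last hour --
# a typical monitoring/troubleshooting query over Logstash-shipped logs.
resp = es.search(
    index="logstash-*",                      # hypothetical index pattern
    query={
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    aggs={"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
    size=0,                                  # aggregations only, no raw hits
)

for bucket in resp["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```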

Posted 2 weeks ago

Apply

3.0 - 8.0 years

25 - 35 Lacs

Hyderabad, Gurugram

Hybrid

Job Summary: As a member of the Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines. You will design resilient, production-ready systems within an AWS-based ecosystem, collaborating with a hands-on, technically strong global team to solve high-complexity problems end-to-end. Key Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production. Integrate and deploy machine learning models into pipelines (e.g., inference services, batch scoring). Lead the delivery of complex extraction, transformation, and ML deployment projects. Scale pipelines on AWS (EKS, ECS, Lambda) and manage DataOps processes with Celery, Redis, and Airflow. Implement robust CI/CD pipelines on Azure DevOps and maintain comprehensive test coverage. Strengthen data quality and reliability through logging, metrics, and automated alerts. Partner with data scientists, ML engineers, and product teams to align on requirements and delivery timelines. Technical Requirements: 2-6 years of relevant experience in data engineering, automation, or ML deployment. Expert proficiency in Python, including building extraction libraries and RESTful APIs. Hands-on experience with task queues and orchestration: Celery, Redis, Airflow. Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB. Containerization and orchestration experience: Docker (mandatory), basic Kubernetes (preferred). Proven experience deploying ML models to production. Solid understanding of CI/CD practices and hands-on experience with Azure DevOps. Familiarity with SQL and NoSQL stores (e.g., PostgreSQL, MongoDB)
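
A minimal sketch of the Celery/Redis batch-scoring pattern this posting names; the broker URLs, record shape, and stand-in scoring logic are all assumptions, not the team's actual code.

```python
from celery import Celery

# Task queue wired to Redis as broker and result backend (URLs are placeholders).
app = Celery(
    "extraction",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task(autoretry_for=(ConnectionError,), retry_backoff=True, max_retries=3)
def score_batch(records):
    """Batch-score extracted records; transient failures retry with backoff."""
    # Placeholder scoring; a real task would call the deployed model service.
    return [
        {"id": r["id"], "score": len(r.get("text", "")) / 1000.0}
        for r in records
    ]

# Enqueued from a pipeline step (e.g., an Airflow task):
# score_batch.delay([{"id": 1, "text": "sample extracted text"}])
```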

Posted 2 weeks ago

Apply

7.0 - 12.0 years

30 - 40 Lacs

Hyderabad

Work from Office

Support enhancements to the MDM platform. Develop pipelines using Snowflake, Python, SQL, and Airflow. Track system performance, troubleshoot issues, and resolve production incidents. Required candidate profile: 5+ years of hands-on, expert-level experience with Snowflake, Python, and orchestration tools like Airflow. Good understanding of the investment domain. Experience with dbt, cloud platforms (AWS, Azure), and DevOps.
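
A minimal sketch of the performance-tracking side of this role, using the Snowflake Python connector to surface the slowest queries of the past day; all connection parameters are placeholders.

```python
import snowflake.connector

# Connection parameters are placeholders; real values come from a secrets store.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)

# Surface the slowest queries of the last 24 hours via the standard
# INFORMATION_SCHEMA.QUERY_HISTORY table function -- a common first step
# when tracking performance and troubleshooting production issues.
sql = """
    SELECT query_id, total_elapsed_time / 1000 AS seconds, query_text
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
        END_TIME_RANGE_START => DATEADD('hour', -24, CURRENT_TIMESTAMP())))
    ORDER BY total_elapsed_time DESC
    LIMIT 10
"""
with conn.cursor() as cur:
    for query_id, seconds, text in cur.execute(sql):
        print(f"{query_id}: {seconds:.1f}s  {text[:80]}")
conn.close()
```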

Posted 2 weeks ago

Apply

5.0 - 10.0 years

12 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Responsibilities: Build and manage data infrastructure on AWS, including S3, Glue, Lambda, OpenSearch, Athena, and CloudWatch, using IaC tools like Terraform. Design and implement scalable ETL pipelines with integrated validation and monitoring. Set up data quality frameworks using tools like Great Expectations, integrated with PostgreSQL or AWS Glue jobs. Implement automated validation checks at key points in the data flow: post-ingest, post-transform, and pre-load. Build centralized logging and alerting pipelines (e.g., using CloudWatch Logs, Fluent Bit, SNS, Filebeat, Logstash, or third-party tools). Define CI/CD processes for deploying and testing data pipelines (e.g., using Jenkins, GitHub Actions). Collaborate with developers and data engineers to enforce schema versioning, rollback strategies, and data contract enforcement. Preferred candidate profile: 5+ years of experience in DataOps, DevOps, or data infrastructure roles. Proven experience with infrastructure-as-code (e.g., Terraform, CloudFormation). Proven experience with real-time data streaming platforms (e.g., Kinesis, Kafka). Proven experience building production-grade data pipelines and monitoring systems in AWS. Hands-on experience with tools like AWS Glue, S3, Lambda, Athena, and CloudWatch. Strong knowledge of Python and scripting for automation and orchestration. Familiarity with data validation frameworks such as Great Expectations, Deequ, or dbt tests. Experience with SQL-based data systems (e.g., PostgreSQL). Understanding of security, IAM, and compliance best practices in cloud data environments.
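
To illustrate the post-ingest/post-transform/pre-load validation idea without committing to a framework version, here is a framework-free pandas sketch; the column names and rules are assumptions, and tools like Great Expectations or dbt tests express the same checks declaratively.

```python
import pandas as pd

def validate_post_ingest(df: pd.DataFrame) -> list[str]:
    """Return the list of failed checks for a freshly ingested orders extract.

    Columns (order_id, amount, created_at) are assumptions for this sketch.
    """
    failures = []
    if df["order_id"].isna().any():
        failures.append("order_id contains nulls")
    if df["order_id"].duplicated().any():
        failures.append("order_id is not unique")
    if (df["amount"] < 0).any():
        failures.append("amount has negative values")
    if pd.to_datetime(df["created_at"], errors="coerce").isna().any():
        failures.append("created_at has unparseable timestamps")
    return failures

batch = pd.DataFrame({
    "order_id": [1, 2, 2],
    "amount": [10.0, -5.0, 7.5],
    "created_at": ["2024-01-01", "2024-01-02", "not-a-date"],
})
for failure in validate_post_ingest(batch):
    print("FAILED:", failure)   # in a pipeline this would raise / alert via SNS
```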

Posted 2 weeks ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Hyderabad

Work from Office

Python, with hands-on experience performing ETL and applying data engineering concepts (PySpark, NumPy, Pandas, AWS Glue and Airflow). SQL, with hands-on Oracle work experience. SQL profilers / query analyzers. AWS services (S3, RDS, Redshift). ETL with Python.
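
A minimal PySpark ETL sketch matching the stack above; the S3 paths and column names are placeholder assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV (the S3 path is a placeholder; Glue/Airflow would
# normally parameterize it per run).
raw = spark.read.csv("s3://my-bucket/raw/orders/", header=True, inferSchema=True)

# Transform: standardize types, drop bad rows, add a derived field.
orders = (
    raw.withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount") > 0)
       .withColumn("amount_usd", F.round(F.col("amount") * F.col("fx_rate"), 2))
)

# Load: write partitioned Parquet for downstream Redshift/Athena consumption.
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://my-bucket/curated/orders/"
)
```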

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 - 0 Lacs

Ahmedabad

Work from Office

Position Summary: The Data Engineer is responsible for maintaining and expanding data workflows, exporting clinical data, creating complex reports, and designing, uploading and maintaining report objects in the Company's online report platform.

Role & responsibilities: Receives business and technical requirements, provides subject-matter expertise, and analyzes and implements data engineering techniques. Writes and optimizes complex SQL scripts. Develops ETL procedures from sources of various data formats to MariaDB/MySQL databases. Manages and maintains databases. Works closely with Data Managers on advanced analytics. Develops, sets up and maintains report objects and dashboards in the online BI platform. Surveys and recommends site scalability scenarios and upgrades. Designs and develops OLAP cubes and reports. Prepares documentation and specifications. Collaborates with other team members. Complies with the Company's Quality and Information Security Management Systems and applicable national and international legislation, including legislation for data protection.

Education Requirements: Required: BSc in Informatics or Computer Science and Engineering. Desired: MSc in Data Science.

Professional Experience requirements: Required: 3 years of working experience in similar disciplines; excellent knowledge of SQL and relational databases (MariaDB); experience with at least one data processing scripting language: Python, Java, R, Scala, etc. Desired: Experience with an online BI platform (e.g., Jasper, Tableau, Power BI, etc.); experience with NoSQL databases (e.g., MongoDB).
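
A minimal sketch of the CSV-to-MariaDB ETL flow this role describes, using pandas with SQLAlchemy (PyMySQL driver); the file, columns, table, and credentials are all assumptions.

```python
import pandas as pd
from sqlalchemy import create_engine

# Connection string is a placeholder; credentials belong in a config/secrets store.
engine = create_engine("mysql+pymysql://etl_user:***@db-host:3306/clinical")

# Extract: a clinical export arrives as CSV (path and columns are assumptions).
visits = pd.read_csv("exports/site_visits.csv", parse_dates=["visit_date"])

# Transform: normalize site codes and keep only completed visits.
visits["site_code"] = visits["site_code"].str.upper().str.strip()
completed = visits[visits["status"] == "COMPLETED"]

# Load: append into the reporting table the BI platform reads from.
completed.to_sql("fact_site_visits", engine, if_exists="append", index=False)
```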

Posted 2 weeks ago

Apply

2.0 - 5.0 years

16 - 18 Lacs

Coimbatore

Work from Office

Overview
Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design and development of software products as well as research and evaluation of new technical solutions.

Responsibilities
Designing, building, testing, and deploying data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.). Developing, monitoring, maintaining, and tuning data pipelines. Writing at-scale data transformations in SQL and Python. Performing code reviews and providing leadership and guidance to junior developers.

Qualifications
Curiosity in learning the business requirements that are driving the engineering requirements. Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team. 3+ years of SQL experience. 3+ years of professional Python experience. 3+ years of professional Linux experience. Preferred familiarity with Snowflake, AWS, GCP, Azure cloud environments. Intellectual curiosity and drive; self-starters will thrive in this position. Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Additional Skills
BS, MS or PhD in Computer Science, Engineering, or equivalent real-world experience. Experience with big data and/or infrastructure. Bonus for having experience in setting up petabytes of data so they can be easily accessed. Understanding of data organization, i.e., partitioning, clustering, file sizes, file formats. Experience working with classical relational databases (Postgres, MySQL, MSSQL). Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations). Proven ability to independently execute projects from concept to implementation to launch and to maintain a live product.

Perks of working at Annalect
We have an incredibly fun, collaborative, and friendly environment, and often host social and learning activities such as game night, speaker series, and so much more! Halloween is a special day on our calendar since it is our Founding Day - we go all out with decorations, costumes, and prizes! Generous vacation policy: paid time off (PTO) includes vacation days, personal days, and a Summer Friday program. Extended time off around the holiday season: our office is closed between Xmas and New Year to encourage our hardworking employees to rest, recharge and celebrate the season with family and friends. As part of Omnicom, we have the backing and resources of a global billion-dollar company, but also have the flexibility and pace of a "startup" - we move fast, break things, and innovate. Work with a modern stack and environment to keep on learning and improving, helping to experiment with and shape the latest technologies.
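
As a concrete taste of "at-scale data transformations in SQL and Python", here is a self-contained sketch using SQLite as a local stand-in for the warehouse; the table and the latest-record-per-key pattern are illustrative assumptions, though the SQL itself runs largely unchanged on Snowflake, BigQuery, or Redshift.

```python
import sqlite3

# A local stand-in for the warehouse, seeded with a tiny raw-events table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id INT, event TEXT, ts TEXT);
    INSERT INTO raw_events VALUES
        (1, 'signup',   '2024-01-01'),
        (1, 'purchase', '2024-01-03'),
        (2, 'signup',   '2024-01-02');
""")

# Keep only each user's most recent event using a window function --
# a staple deduplication transformation in warehouse SQL.
rows = conn.execute("""
    SELECT user_id, event, ts FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY user_id ORDER BY ts DESC
        ) AS rn
        FROM raw_events
    ) WHERE rn = 1
""").fetchall()
print(rows)   # [(1, 'purchase', '2024-01-03'), (2, 'signup', '2024-01-02')]
```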

Posted 2 weeks ago

Apply

4.0 - 9.0 years

8 - 16 Lacs

Kolkata

Remote

Enhance and modify applications, configure existing systems, and provide user support. .NET Full Stack developer, or Angular front-end (18+) with .NET backend. SQL Server. Angular version 15+ is mandatory; Angular version 18+ is nice to have.

Posted 2 weeks ago

Apply

1.0 - 4.0 years

1 - 3 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Title: Cloud Data Engineer
Location: Chennai, Hyderabad, Bangalore
Experience: 1-4 years

Job Summary
The Cloud Data Engineer designs and builds scalable data pipelines and architectures in cloud environments. This role supports analytics, machine learning, and business intelligence initiatives by ensuring reliable data flow and transformation.

Key Responsibilities
Develop and maintain ETL/ELT pipelines using cloud-native tools. Design data models and storage solutions optimized for performance and scalability. Integrate data from various sources (APIs, databases, streaming platforms). Ensure data quality, consistency, and security across pipelines. Collaborate with data scientists, analysts, and business teams. Monitor and troubleshoot data workflows and infrastructure. Automate data engineering tasks using scripting and orchestration tools.

Required Skills
Experience with cloud data platforms (AWS Glue, Azure Data Factory, Google Cloud Dataflow). Proficiency in SQL and programming languages (Python, Scala, Java). Knowledge of big data technologies (Spark, Hadoop, Kafka). Familiarity with data warehousing solutions (Redshift, BigQuery, Snowflake). Understanding of data governance, privacy, and compliance standards.

Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 3+ years of experience in data engineering, preferably in cloud environments. Certifications in cloud data engineering (e.g., Google Professional Data Engineer, AWS Data Analytics Specialty) are a plus.

Posted 2 weeks ago

Apply

0.0 - 2.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Responsibilities: * Design, develop & maintain full-stack applications using Python, Linux & Deep Learning. * Collaborate with cross-functional teams on data engineering projects.

Posted 2 weeks ago

Apply

2.0 - 7.0 years

5 - 14 Lacs

Bengaluru, Karnataka

Work from Office

Proven knowledge of coding in Python. Advanced knowledge of regression & linear optimization (Python-based; relevant libraries: pandas, numpy, scikit-learn, or-tools). 2+ years of working experience in data analytics with a proven project/solution track record. Skills in data analysis & visualization (Python-based; relevant libraries: pandas, numpy, plotly). Experience as an Operations Research analyst or similar: has worked with optimization models (e.g., Integer and Linear Programming) in the past.

Qualifications: Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science or a related field. Self-driven attitude: executes tasks responsibly; a can-do, get-stuff-done attitude is crucial. Good communication skills, able to effectively discuss project requirements with different hierarchy levels and disciplines (engineering, business, etc.). Fluent English.

Special Instructions: Interview Mode: Virtual. Work Mode: Hybrid (Bangalore). Duration of Contract: 12 Months. Engagement type: open for both options, C2H (Contract-to-Hire) / One-Time Hire.
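
A minimal sketch of the linear-optimization skill set this posting asks for, using OR-Tools' GLOP solver on a toy production-planning problem; all coefficients are invented for illustration.

```python
from ortools.linear_solver import pywraplp

# Toy LP: maximize profit from two products subject to machine-hour
# and labor-hour limits (all numbers are made up).
solver = pywraplp.Solver.CreateSolver("GLOP")

x = solver.NumVar(0, solver.infinity(), "units_product_a")
y = solver.NumVar(0, solver.infinity(), "units_product_b")

solver.Add(2 * x + 1 * y <= 100)   # machine hours available
solver.Add(1 * x + 3 * y <= 90)    # labor hours available
solver.Maximize(40 * x + 30 * y)   # per-unit profit

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print(f"A = {x.solution_value():.1f}, B = {y.solution_value():.1f}")
    print(f"profit = {solver.Objective().Value():.0f}")
```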

Posted 2 weeks ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's in it for you: Be part of a global company and deliver solutions at enterprise scale. Collaborate with a hands-on, technically strong team (including leadership). Solve high-complexity, high-impact problems end-to-end. Build, test, deploy, and maintain production-ready pipelines from ideation through deployment.

Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production. Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring). Lead critical stages of the data engineering lifecycle, including: end-to-end delivery of complex extraction, transformation, and ML deployment projects; scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS); designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration; implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback); writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage). Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts. Define and evolve platform standards and best practices for code, testing, and deployment. Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs. Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines.

Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs. Hands-on experience with task queues and orchestration: Celery, Redis, Airflow. Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch. Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred). Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints). Proficient in writing tests (unit, integration, load) and enforcing high coverage. Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines. Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB). Strong debugging, performance tuning, and automation skills. Openness to evaluate and adopt emerging tools and languages as needed.

Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or a related field. 2-6 years of relevant experience in data engineering, automation, or ML deployment. Prior contributions on GitHub, technical blogs, or open-source projects. Basic familiarity with GenAI model integration (calling LLM or embedding APIs).

What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day.
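
Given the emphasis on pytest and coverage above, a minimal sketch of the testing style implied here; the extraction function and payload shape are hypothetical.

```python
import pytest

def extract_total(payload: dict) -> float:
    """Hypothetical extraction helper: pull an order total out of a scraped
    JSON payload, tolerating string-formatted numbers with separators."""
    raw = payload["order"]["total"]
    return float(str(raw).replace(",", ""))

def test_extract_total_parses_numbers():
    assert extract_total({"order": {"total": 42}}) == 42.0

def test_extract_total_strips_thousands_separator():
    assert extract_total({"order": {"total": "1,234.50"}}) == 1234.5

def test_extract_total_missing_field_raises():
    with pytest.raises(KeyError):
        extract_total({"order": {}})
```

Run with `pytest -q`; adding `coverage run -m pytest` enforces the coverage discipline the posting mentions.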
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: health care coverage designed for the mind and body. Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Lead Data Engineer - Data Management

Job description
Company Overview: Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics): Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Location: Hyderabad

Role Overview: Accordion is looking for a Lead Data Engineer, who will be responsible for the design, development, configuration/deployment, and maintenance of the technology stack described below. He/she must have an in-depth understanding of the various tools & technologies in this domain to design and implement robust and scalable solutions that address clients' current and future requirements at optimal cost. The Lead Data Engineer should be able to evaluate existing architectures and recommend ways to upgrade and improve their performance, for both on-premises and cloud-based solutions. A successful Lead Data Engineer should possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in Business Intelligence and Data Warehousing environments. He/she should have strong organizational, critical thinking, and communication skills.

What You will do: Partner with clients to understand their business and create comprehensive business requirements. Develop an end-to-end Business Intelligence framework based on requirements, including recommending appropriate architecture (on-premises or cloud), analytics and reporting. Work closely with the business and technology teams to guide solution development and implementation. Work closely with the business teams to arrive at methodologies to develop KPIs and metrics. Work with the Project Manager in developing and executing project plans within the assigned schedule and timeline. Develop standard reports and functional dashboards based on business requirements. Conduct training programs and knowledge transfer sessions for junior developers when needed. Recommend improvements to provide optimal reporting solutions. Stay curious to learn new tools and technologies to provide futuristic solutions for clients.

Ideally, you have: An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges are preferred. More than 5 years of experience in a related field.
Proven expertise in SSIS, SSAS and SSRS (MSBI Suite). In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and data warehouses (any one of Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.). In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.). Good understanding of Azure or AWS: Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) or AWS (Glue, Aurora Database, Dynamo Database, Redshift, QuickSight). Proven ability to take initiative and be innovative. An analytical mind with a problem-solving attitude.

Why Explore a Career at Accordion: High-growth environment: semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility. Cross-domain exposure: interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes. Entrepreneurial environment: intellectual freedom to make decisions and own them; we expect you to spread your wings and assume larger responsibilities. Fun culture and peer group: a non-bureaucratic and fun working environment, with a strong peer environment that will challenge you and accelerate your learning curve.

Other benefits for full-time employees: health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision, dental) for employees and family members, free doctor consultations, counsellors, etc. Corporate meal card options for ease of use and tax benefits. Team lunches, company-sponsored team outings and celebrations. Cab reimbursement for women employees beyond a certain time of the day. Robust leave policy to support work-life balance, with a specially designed leave structure to support women employees for maternity and related requests. Reward and recognition platform to celebrate professional and personal milestones. A positive & transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Pune

Work from Office

Job Title: Senior / Lead Data Engineer
Company: Synechron Technologies
Locations: Pune or Chennai
Experience: 5 to 12 years

Synechron Technologies is seeking an accomplished Senior or Lead Data Engineer with expertise in Java and Big Data technologies. The ideal candidate will have a strong background in Java Spark, with extensive experience working with big data frameworks such as Spark, Hadoop, HBase, Couchbase, and Phoenix. You will lead the design and development of scalable data solutions, ensuring efficient data processing and deployment in a modern technology environment.

Key Responsibilities: Lead the development and optimization of large-scale data pipelines using Java and Spark. Design, implement, and maintain data infrastructure leveraging Spark, Hadoop, HBase, Couchbase, and Phoenix. Collaborate with cross-functional teams to gather requirements and develop robust data solutions. Lead deployment automation and management using CI/CD tools including Jenkins, Bitbucket, GIT, Docker, and OpenShift. Ensure the performance, security, and reliability of data processing systems. Provide technical guidance to team members and participate in code reviews. Stay updated on emerging technologies and leverage best practices in data engineering.

Qualifications & Skills: 5 to 14 years of experience as a Data Engineer or in a similar role. Strong expertise in Java programming and Apache Spark. Proven experience with Big Data technologies: Spark, Hadoop, HBase, Couchbase, and Phoenix. Hands-on experience with CI/CD tools: Jenkins, Bitbucket, GIT, Docker, OpenShift. Solid understanding of data modeling, ETL workflows, and data architecture. Excellent problem-solving, communication, and leadership skills.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice

Posted 2 weeks ago

Apply

3.0 - 8.0 years

15 - 25 Lacs

Noida, Pune, Bengaluru

Work from Office

Job Description - Pre-Sales Engineer

Job Title: Pre-Sales Engineer - Data & AI Solutions (Industry: Manufacturing, Supply Chain, Retail, CPG & Aviation)
Department: Pre-Sales / Industry Solutions

Job Summary: We are seeking a dynamic and experienced Pre-Sales Engineer with a strong foundation in Data & AI solutions and a domain focus on Manufacturing, Supply Chain, Retail, CPG and Aviation. You will be responsible for solution positioning, customer engagement and proposal ownership to drive business growth. The ideal candidate will blend industry knowledge, technical expertise and strategic thinking to craft value-driven solutions aligned with client needs.

Key Responsibilities:
Pre-Sales & Solution Consulting: Lead the end-to-end pre-sales lifecycle, from requirement discovery to proposal development, solutioning and client presentations. Collaborate with cross-functional teams (Sales, Delivery, Solution, Data Science) to craft tailored solutions leveraging AI/ML, Data Engineering, BI, Cloud and Industry 4.0 technologies. Translate client pain points into value propositions, solution blueprints and technical architectures.

Industry-Focused Solutioning: Create industry-specific use cases and demos aligned with domain challenges in: Manufacturing (OEE, Predictive Maintenance, Quality Analytics, Digital Twin); Supply Chain (Demand Forecasting, Inventory Optimization, Logistics Analytics); Retail & CPG (Assortment Optimization, Customer Insights, Pricing and Promo Effectiveness); Aviation (MRO Optimization, Delay Prediction, Parts Lifecycle Analytics).

Client Engagement & Enablement: Conduct client workshops, discovery sessions, PoCs, and technical deep dives to shape solution understanding and buy-in. Act as a trusted advisor to clients and internal stakeholders on data and AI maturity journeys.

Proposals, RFPs, and Documentation: Own and contribute to proposals, SoWs, RFP/RFI responses and collateral creation (solution decks, value maps, case studies). Ensure all deliverables meet quality, timeline, and alignment expectations.

Market and Technology Insights: Stay ahead of emerging trends in AI, GenAI, IoT, Digital Twin, LLMs, and cloud platforms (Azure, AWS, GCP). Continuously gather competitive intelligence and benchmarking insights to strengthen solution offerings.

Qualifications: Bachelor's or Master's degree in Engineering, Business, Data Science or a related field. 3 to 5 years of experience in pre-sales, solutioning or consulting roles with a focus on Data & AI solutions. Deep understanding of data architectures, AI/ML workflows and business intelligence platforms. Exposure to industry processes, KPIs and digital transformation use cases in Manufacturing, Retail/CPG, and Supply Chain. Proficient in tools such as Power BI/Tableau, Azure/AWS, Python/SQL, with knowledge of ML lifecycle tools (MLflow, Databricks, SageMaker).

Preferred Skills: Experience working with SAP, Oracle, Snowflake, and integrating with enterprise systems. Understanding of IoT protocols (MQTT, OPC UA) and smart manufacturing practices. Strong communication and storytelling skills - ability to influence CXOs and business/technical teams. Agile, collaborative, and customer-first mindset.

Why Join Us? Be at the forefront of industry innovation in Data & AI. Work across diverse industries and global clients. Opportunity to lead flagship initiatives and cutting-edge solution launches. Empowerment to contribute to IP creation, accelerators, and playbooks.
If interested send your updated resume at chaity.mukherjee@celebaltech.com

Posted 2 weeks ago

Apply

4.0 - 6.0 years

18 - 22 Lacs

Noida

Work from Office

Responsibilities: Collaborate with the sales team to understand customer challenges and business objectives, and propose solutions, POCs, etc. Develop and deliver impactful technical presentations and demos showcasing the capabilities of GCP Data and AI and GenAI solutions. Conduct technical proofs of concept (POCs) to validate the feasibility and value proposition of GCP solutions. Collaborate with technical specialists and solution architects from the COE team to design and configure tailored cloud solutions. Manage and qualify sales opportunities, working closely with the sales team to progress deals through the sales funnel. Stay up to date on the latest GCP offerings, trends, and best practices.

Experience: Design and implement a comprehensive strategy for migrating and modernizing existing relational on-premises databases to scalable and cost-effective solutions on Google Cloud Platform (GCP). Design and architect solutions for DWH modernization, with experience building data pipelines in GCP. Strong experience in BI reporting tools (Looker, Power BI and Tableau). In-depth knowledge of Google Cloud Platform (GCP) services, particularly Cloud SQL, Postgres, AlloyDB, BigQuery, Looker, Vertex AI and Gemini (GenAI). Strong knowledge and experience in providing solutions to process massive datasets in real time and in batch, using cloud-native/open-source orchestration techniques. Build and maintain data pipelines using Cloud Dataflow to orchestrate real-time and batch data processing for streaming and historical data. Strong knowledge of and experience with best practices for data governance, security, and compliance. Excellent communication and presentation skills, with the ability to tailor technical information to customer needs. Strong analytical and problem-solving skills. Ability to work independently and as part of a team.
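
A minimal sketch of the kind of BigQuery check that backs a DWH-modernization demo, using the official Python client; the project, dataset, and table names are placeholders.

```python
from google.cloud import bigquery

# Client picks up credentials from the environment (e.g., a service account).
client = bigquery.Client(project="my-demo-project")   # project id is a placeholder

# A typical post-migration sanity check: daily row counts for a migrated table.
sql = """
    SELECT DATE(event_ts) AS day, COUNT(*) AS row_count
    FROM `my-demo-project.analytics.events`
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY day
    ORDER BY day
"""
for row in client.query(sql).result():
    print(row.day, row.row_count)
```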

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

We are seeking a skilled and motivated Data Engineer with hands-on experience in Snowflake, Azure Data Factory (ADF), and Fivetran. The ideal candidate will be responsible for building and optimizing data pipelines, ensuring efficient data integration and transformation to support analytics and business intelligence initiatives.

Key Responsibilities: Design, develop, and maintain robust data pipelines using Fivetran, ADF, and other ETL tools. Build and manage scalable data models and data warehouses on Snowflake. Integrate data from various sources into Snowflake using automated workflows. Implement data transformation and cleansing processes to ensure data quality and integrity. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Monitor pipeline performance, troubleshoot issues, and optimize for efficiency. Maintain documentation related to data architecture, processes, and workflows. Ensure data security and compliance with company policies and industry standards.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 3+ years of experience in data engineering or a similar role. Proficiency with Snowflake, including architecture, SQL scripting, and performance tuning. Hands-on experience with Azure Data Factory (ADF) for pipeline orchestration and data integration. Experience with Fivetran or similar ELT/ETL automation tools. Strong SQL skills and familiarity with data warehousing best practices. Knowledge of cloud platforms, preferably Microsoft Azure. Familiarity with version control tools (e.g., Git) and CI/CD practices. Excellent communication and problem-solving skills.

Preferred Qualifications: Experience with Python, dbt, or other data transformation tools. Understanding of data governance, data quality, and compliance frameworks. Knowledge of additional data tools (e.g., Power BI, Databricks, Kafka) is a plus.
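
A minimal sketch of the incremental-load step such a pipeline typically ends with: a Snowflake MERGE from a Fivetran-style staging table into a curated model, driven from Python. All identifiers and credentials are placeholders; in practice ADF or an orchestrator would trigger this after Fivetran lands new rows.

```python
import snowflake.connector

# Connection parameters are placeholders; real values come from a secrets store.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="TRANSFORM_WH", database="ANALYTICS",
)

# Incremental upsert from a staging table into the curated model:
# update changed rows, insert new ones, leave the rest untouched.
merge_sql = """
    MERGE INTO curated.customers AS tgt
    USING staging.customers AS src
      ON tgt.customer_id = src.customer_id
    WHEN MATCHED AND src.updated_at > tgt.updated_at THEN UPDATE SET
      email = src.email, updated_at = src.updated_at
    WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
      VALUES (src.customer_id, src.email, src.updated_at)
"""
with conn.cursor() as cur:
    cur.execute(merge_sql)
conn.close()
```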

Posted 2 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office

As a Senior Data Engineer at JLL Technologies, you will: Design, architect, and develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements. Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources and build structure for previously unstructured data. Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development. Develop a good understanding of how data flows and is stored across an organization, spanning multiple applications such as CRM, Broker & Sales tools, Finance, HR, etc. Unify, enrich, and analyze a variety of data to derive insights and opportunities. Design & develop data management and data persistence solutions for application use cases leveraging relational and non-relational databases, enhancing our data processing capabilities. Develop POCs to influence platform architects, product managers and software engineers to validate solution proposals and migrate. Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help migrate colleagues to the modern technology platform. Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org. Mentor other members of the team and organization and contribute to the organization's growth.

What we are looking for: 6+ years of work experience and a bachelor's degree in Information Science, Computer Science, Mathematics, Statistics or a quantitative discipline in science, business, or social science. A hands-on engineer who is curious about technology, able to quickly adapt to change, and who understands the technologies supporting areas such as cloud computing (AWS, Azure (preferred), etc.), microservices, streaming technologies, networking, and security. 3 or more years of active development experience as a data developer using PySpark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc. Ability to build, test and enhance data curation pipelines, integrating data from a wide variety of sources like DBMSs, file systems, APIs and streaming systems for KPI and metrics development with high data quality and integrity. Ability to maintain the health and monitoring of assigned data engineering capabilities that span analytic functions by triaging maintenance issues; ensure high availability of the platform; monitor workload demands; work with Infrastructure Engineering teams to maintain the data platform; serve as an SME for one or more applications. A team player: a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously within a fast-paced environment, working with cross-functional teams. 3+ years of experience working with source code control systems and Continuous Integration/Continuous Deployment tools. Independent and able to manage, prioritize & lead workloads.

What you can expect from us: Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits and pay.
Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you...
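
Closing out this posting, a minimal Spark Structured Streaming sketch of the windowed-aggregation pattern the JLL role references; it uses the built-in `rate` source so it runs anywhere, whereas production would read from Azure Event Hubs or Kafka instead.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming_demo").getOrCreate()

# Demo source: the built-in "rate" stream emits (timestamp, value) rows.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Windowed aggregation: count events per 30-second window, a typical
# building block for streaming KPIs.
counts = (
    events.withWatermark("timestamp", "1 minute")
          .groupBy(F.window("timestamp", "30 seconds"))
          .count()
)

query = (
    counts.writeStream.outputMode("update")
          .format("console")            # swap for a Delta/ADLS sink in production
          .start()
)
query.awaitTermination(60)   # run for a minute (demo only)
query.stop()
```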

Posted 2 weeks ago

Apply

6.0 - 10.0 years

15 - 30 Lacs

Indore, Jaipur, Bengaluru

Work from Office

Experience in dashboard story development, dashboard creation, and data engineering pipelines. Manage and organize large volumes of application log data using Google BigQuery. Experience with log analytics, user engagement metrics, and product performance metrics. Required candidate profile: Experience with tools like Tableau, Power BI, or ThoughtSpot AI. Understand log data generated by Python-based applications. Ensure data integrity, consistency, and accessibility for analytical purposes.

Posted 2 weeks ago

Apply

6.0 - 9.0 years

5 - 9 Lacs

Hyderabad

Work from Office

We are looking for a highly skilled Data Engineer with 6 to 9 years of experience to join our team at BlackBaud, located in [location to be specified]. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibility Design, develop, and implement data pipelines and architectures to support business intelligence and analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data systems, ensuring scalability, reliability, and performance. Troubleshoot and resolve complex technical issues related to data engineering projects. Participate in code reviews and contribute to the improvement of the overall code quality. Stay up-to-date with industry trends and emerging technologies in data engineering. Job Requirements Strong understanding of data modeling, database design, and data warehousing concepts. Experience with big data technologies such as Hadoop, Spark, and NoSQL databases. Excellent programming skills in languages like Java, Python, or Scala. Strong analytical and problem-solving skills, with attention to detail and ability to work under pressure. Good communication and collaboration skills, with the ability to work effectively in a team environment. Ability to adapt to changing priorities and deadlines in a fast-paced IT Services & Consulting environment.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

11 - 15 Lacs

Noida

Work from Office

We are looking for a highly skilled and experienced professional to join our team as an Associate Manager in Data Engineering, Data Modelling, or Data Science. The ideal candidate will have a strong background in data analysis and engineering, with excellent problem-solving skills. Roles and Responsibility Design and develop scalable data pipelines and architectures to support business intelligence and analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data models and databases to support business decision-making. Analyze complex data sets to identify trends and patterns, and provide actionable insights. Work closely with stakeholders to understand business needs and develop solutions that meet those needs. Stay up-to-date with industry trends and emerging technologies in data engineering and analytics. Job Requirements Strong understanding of data structures, algorithms, and software design patterns. Experience with big data technologies such as Hadoop, Spark, and NoSQL databases. Excellent problem-solving skills and the ability to analyze complex data sets. Strong communication and collaboration skills, with the ability to work with cross-functional teams. Ability to design and implement scalable data pipelines and architectures. Strong understanding of data modelling concepts and techniques. Educational qualifications: Any Graduate/Postgraduate degree.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

55 - 60 Lacs

Bengaluru

Work from Office

Our vision for the future is based on the idea that transforming financial lives starts by giving our people the freedom to transform their own. We have a flexible work environment and fluid career paths. We not only encourage but celebrate internal mobility. We also recognize the importance of purpose, well-being, and work-life balance. Within Empower and our communities, we work hard to create a welcoming and inclusive environment, and our associates dedicate thousands of hours to volunteering for causes that matter most to them. Chart your own path and grow your career while helping more customers achieve financial freedom. Empower Yourself.

Job Summary: At Empower, Sr. Architect is a mix of a leadership position and a thought leadership role. A Sr. Architect works with enterprise architects, and both business and IT teams, to align solutions to the technology vision they help create. This role supports Enterprise Architects in the development of technology strategies, reference architectures, solutions, best practices and guidance across the entire IT development organization, all the while addressing total cost of ownership, stability, performance and efficiency. The candidate will also be working with the Empower Innovation Lab team as the team experiments with emerging technologies, such as Generative AI and Advanced Analytics. In this fast-paced environment, the person must possess a "can-do" attitude while demonstrating a strong work ethic, and should have a strong aptitude for helping drive decisions. He or she will be actively involved in influencing the strategic direction of technology at Empower Retirement. There will be collaboration across all teams, including IT Infrastructure, the PMO office, the business, and third-party integrators, in reviewing, evaluating, designing and implementing solutions. The Architect must understand available technology options and educate and influence technology teams to leverage them where appropriate. The Architect will recognize and propose alternatives, make recommendations, and describe any necessary trade-offs. In some cases, particularly on key initiatives, the Architect will participate in the design and implementation of end-to-end solutions directly with development teams. The ideal candidate will leverage their technical leadership and direction-setting skills with the development organization to prove technical concepts quickly using a variety of tools, methods, & frameworks.

Responsibilities: Help the Enterprise Architect, working with peer Sr. Architects and more junior resources, to define and execute on the business-aligned IT strategy and vision. Develop, document, and provide input into the technology roadmap for Empower. Create reference architectures that demonstrate an understanding of technology components and the relationships between them. Design and modernize complex systems into cloud-compatible or cloud-native applications where applicable. Create strategies and designs for migrating applications to cloud systems. Participate in the evaluation of new applications and technical options, challenge the status quo, create solid business cases, and influence direction while establishing partnerships with key constituencies. Implement best practices, standards & guidance, then subsequently provide coaching to technology team members. Make leadership recommendations regarding strategic architectural considerations related to process design and process orchestration.
Provide strong leadership and direction in development/engineering practices.
Collaborate with other business and technology teams on architecture and design issues.
Respond to evolving and changing security conditions; implement and recommend security guidelines.
Provide thought leadership, advocacy, articulation, assurance, and maintenance of the enterprise architecture discipline.
Provide solutions, guidance, and implementation assistance within full-stack development teams.
Recommend long-term, scalable, and performant architecture changes while keeping cost in control.

Preferred Qualifications:
12+ years of experience in the development and delivery of data systems, relevant to roles such as Data Analyst, ETL (Extract, Transform and Load) Developer (Data Engineer), Database Administrator (DBA), Business Intelligence Developer (BI Engineer), Machine Learning Developer (ML Engineer), Data Scientist, Data Architect, Data Governance Analyst, or a managerial position overseeing any of these functions.
3+ years of experience creating solution architectures and strategies across multiple architecture domains (business, application, data, integration, infrastructure, and security).
Solid experience with the following technology disciplines: Python, cloud architectures, AWS (Amazon Web Services), big data (300+ TB), advanced analytics, advanced SQL skills, data warehouse systems (Redshift or Snowflake), advanced programming, NoSQL, distributed computing, real-time streaming.
Nice to have: experience in Java, Kubernetes, Argo, Aurora, Google Analytics, Meta Analytics, integration with third-party APIs, SOA and microservices design, and modern integration methods (API gateways/web services, messaging, and RESTful architectures).
Familiarity with BI tools such as Tableau or QuickSight.
Experience with code coverage tools.
Working knowledge of architectural cross-cutting concerns and their trade-offs, including caching, monitoring, operational surround, high availability, security, etc.
Demonstrated competency applying architecture frameworks and development methods.
Understanding of business process analysis and business process management (BPM).
Excellent written and verbal communication skills.
Experience mentoring junior team members through code reviews and recommending adherence to best practices.
Experience working with global, distributed teams.
Interacts with people constantly, demonstrating strong people skills; able to motivate and inspire, influencing and evangelizing a set of ideals within the enterprise.
Requires a high degree of independence, proactively achieving objectives without direct supervision.
Negotiates effectively at the decision-making table to accomplish goals.
Evaluates and solves complex and unique problems with strong problem-solving skills; thinks broadly, avoiding tunnel vision and considering problems from multiple angles.
Possesses a general understanding of the wealth management industry and how technology impacts the business.
Stays on top of the latest technologies and trends through continuous learning, including reading, training, and networking with industry colleagues.
Data Architecture - Proficiency in platform design and data architecture, ensuring scalable, efficient, and secure data systems that support business objectives.
Data Modeling - Expertise in designing data models that accurately represent business processes and facilitate efficient data retrieval and analysis.
Cost Management - Ability to manage costs associated with data storage and processing, optimizing resource usage and ensuring budget adherence.
Disaster Recovery Planning - Planning for data disaster recovery to ensure business continuity and data integrity in case of unexpected events.
SQL Optimization/Performance Improvements - Advanced skills in optimizing SQL queries for performance, reducing query execution time and improving overall system efficiency.
CI/CD - Knowledge of continuous integration and continuous deployment processes, ensuring rapid and reliable delivery of data solutions.
Data Encryption - Implementing data encryption techniques to protect sensitive information and ensure data privacy and security.
Data Obfuscation/Masking - Techniques for data obfuscation and masking to protect sensitive data while maintaining its usability for testing and analysis (a minimal illustrative sketch appears after this listing).
Reporting - Experience with static and dynamic reporting to provide comprehensive and up-to-date information to business users.
Dashboards and Visualizations - Creating dashboards and visualizations to present data in an intuitive and accessible manner, facilitating data-driven insights.
Generative AI / Machine Learning - Understanding of generative artificial intelligence and machine learning to develop advanced predictive models and automate decision-making processes. Understanding of machine learning algorithms, deep learning frameworks, and AI model architectures. Understanding of ethical AI principles and practices. Experience implementing AI transparency and explainability techniques. Knowledge of popular RAG frameworks and tools (e.g., LangChain, LlamaIndex). Familiarity with fairness metrics and techniques to mitigate bias in AI models.

Sample technologies:
Cloud Platforms - AWS (preferred), Azure, or Google Cloud
Databases - Oracle, Postgres, MySQL (preferred), RDS, DynamoDB (preferred), Snowflake or Redshift (preferred)
Data Engineering (ETL, ELT) - Informatica, Talend, Glue, Python (must), Jupyter
Streaming - Kafka or Kinesis
CI/CD Pipeline - Jenkins, GitHub, GitLab, or ArgoCD
Business Intelligence - QuickSight (preferred), Tableau (preferred), Business Objects, MicroStrategy, Qlik, Power BI, Looker
Advanced Analytics - AWS SageMaker (preferred), TensorFlow, PyTorch, R, scikit-learn
Monitoring tools - DataDog (preferred), AppDynamics, or Splunk
Big data technologies - Apache Spark (must), EMR (preferred)
Container Management technologies - Kubernetes, EKS (preferred), Docker, Helm

Preferred Certifications:
AWS Solution Architect
AWS Data Engineer
AWS Machine Learning Engineer
AWS Machine Learning

Education:
Bachelor's and/or master's degree in computer science or a related field (information systems, mathematics, software engineering).

We are an equal opportunity employer with a commitment to diversity. All individuals, regardless of personal characteristics, are encouraged to apply. All qualified applicants will receive consideration for employment without regard to age, race, color, national origin, ancestry, sex, sexual orientation, gender, gender identity, gender expression, marital status, pregnancy, religion, physical or mental disability, military or veteran status, genetic information, or any other status protected by applicable state or local law.
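A minimal sketch of the data obfuscation/masking skill named above, assuming a deterministic salted-hash approach so that joins across masked test datasets still line up; the function name, salt handling, and truncation length are illustrative assumptions, not this employer's actual implementation.

import hashlib

# Assumed placeholder; in practice the salt would come from a secrets manager.
SALT = "replace-with-a-secret-salt"

def mask_value(value: str) -> str:
    """Deterministically mask a sensitive string: identical inputs yield
    identical tokens, so referential integrity survives masking."""
    digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()
    return digest[:12]  # truncated for readability in test data

if __name__ == "__main__":
    emails = ["alice@example.com", "bob@example.com", "alice@example.com"]
    print([mask_value(e) for e in emails])  # first and third tokens match

Because the hash is one-way, the masked column stays usable for grouping and joining in test or analytics environments without exposing the underlying values.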

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

The Job
Design and implementation of data pipelines for processing large volumes of data.
Ingesting batch and streaming data from various data sources.
Writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.).
Data modeling: proficiency in creating both normalized and denormalized database schemas.
Developing applications in Python.
Developing ETL, OLAP-based, and analytical applications.
Working experience with Azure / AWS services.
Working in Databricks, Snowflake, or other cloud data platforms.
Working in Agile / Scrum methodologies.
Good knowledge of cloud security concepts and implementation of different types of authentication methods.
Working on Azure DevOps: create and manage Git and code versioning, and build CI/CD pipelines and test plans.

Your Profile
Experience with a strong focus on data engineering.
Design, develop, and maintain data pipelines.
Implement ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes for seamless data integration (a minimal sketch appears after this listing).
Collaborate with cross-functional teams to design and implement large-scale distributed systems for data processing and analytics.
Optimize and maintain CI/CD pipelines to ensure smooth deployment and integration of new data solutions.
Exposure to Python libraries such as NumPy, pandas, Beautiful Soup, etc.
Experience with Databricks, Snowflake, or other cloud data platforms.
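A minimal, self-contained sketch of the ETL shape this posting describes, assuming pandas for the transform step and SQLite as a stand-in load target; the table name, columns, and sample records are illustrative assumptions (production work here would target Azure/AWS services, Databricks, or Snowflake instead).

import sqlite3
import pandas as pd

# Extract: a real pipeline would read batch files or a stream; a tiny
# in-memory frame keeps this example standalone.
raw = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": ["10.5", "20.0", "bad"],  # one deliberately malformed record
    "region": ["south", "north", "south"],
})

# Transform: coerce types, drop rows that fail validation, normalize text.
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
clean = raw.dropna(subset=["amount"]).assign(region=lambda d: d["region"].str.upper())

# Load: write the curated table and verify it with an aggregate query.
with sqlite3.connect("orders.db") as conn:
    clean.to_sql("orders_curated", conn, if_exists="replace", index=False)
    print(pd.read_sql(
        "SELECT region, SUM(amount) AS total FROM orders_curated GROUP BY region",
        conn,
    ))

The same extract/transform/load structure carries over to Databricks or Snowflake; only the source connectors and the load target change.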

Posted 2 weeks ago

Apply

10.0 - 17.0 years

35 - 55 Lacs

Bengaluru

Work from Office

Responsibilities
We are seeking an experienced and dynamic Senior Manager, Data Science to lead our team in delivering innovative data science solutions to our clients. The ideal candidate will possess a technical background in data engineering, data science, or strategic business analytics, coupled with exceptional leadership and project management skills. As a Senior Manager, Data Science, you will be responsible for overseeing the end-to-end delivery of medium to complex projects, managing geographically distributed teams, and engaging with senior client stakeholders to ensure successful project outcomes.

Key Responsibilities:
Technical:
Previous hands-on experience in the data science field.
Demonstrated ability to convert business problems into technical solutions and technical delivery roadmaps.
Overall 9+ years of experience, progressively moving from technical roles to delivery management positions.
Lead the end-to-end delivery of medium to complex projects in data engineering and data science.
Manage geographically distributed teams of 15-20 people, preferably in an agile delivery model.
Conduct technical client presentations with support from Subject Matter Experts (SMEs) and Centers of Excellence (CoEs).
Engage with senior stakeholders (Director level and above) from the client side to understand requirements and ensure alignment with project objectives.
Drive the team to maintain a high customer NPS on all parameters: quality of deliverables, timeliness, technical rigor, and business grasp.
Demonstrate passion for the role and commitment to the company's objectives.
Stay updated with the latest technological trends and demonstrate strong technical acumen.
Possess excellent written and verbal communication skills.
Utilize analytical and creative thinking skills to solve complex problems.
Foster a collaborative team environment and exhibit self-driven initiative.
Demonstrate strong problem-solving abilities and the ability to navigate challenges effectively.

Qualifications:
Bachelor's degree (minimum four-year program).
Proven track record of successfully delivering medium to complex projects in data science.
Experience managing geographically distributed teams in an agile delivery model.
Excellent leadership and stakeholder management skills, with the ability to engage with senior stakeholders effectively.

If you are passionate about leading teams to deliver impactful data analytics solutions, possess strong technical expertise, and thrive in a fast-paced environment, we encourage you to apply for this exciting opportunity.

Posted 2 weeks ago

Apply