7.0 years
0 Lacs
India
On-site
At Global Analytics, we're driving HEINEKEN's transformation into the world's leading data-driven brewer. Our innovative spirit flows through the entire company, promoting a data-first approach in every aspect of our business. From brewery operations and logistics to IoT systems and sustainability monitoring, our smart data products are instrumental in accelerating growth and operational excellence. As we scale our analytics and observability solutions globally, we are seeking a Grafana Developer to join our dynamic Global Analytics team.

About the Team: The Global Analytics team at HEINEKEN is a diverse group of Data Scientists, Data Engineers, BI Specialists, and Translators collaborating across continents. Our culture promotes experimentation, agility, and bold thinking. Together, we transform raw data into impactful decisions that support HEINEKEN's vision for sustainable, intelligent brewing.

Grafana Developer
We are looking for a Grafana Developer to build and maintain real-time dashboards that support our IoT monitoring, time-series analytics, and operational excellence initiatives. This is a hands-on technical role where you will collaborate with multiple teams to bring visibility to complex data across global operations.

If you are excited to:
Build real-time dashboards and monitoring solutions using Grafana.
Work with InfluxDB, Redshift, and other time-series and SQL-based data sources.
Translate complex system metrics into clear visual insights that support global operations.
Collaborate with engineers, DevOps, IT Operations, and product teams to bring data to life.
Be part of HEINEKEN's digital transformation journey focused on data and sustainability.

And if you like:
A hybrid, flexible work environment with access to cutting-edge technologies.
Working on impactful projects that monitor and optimize global brewery operations.
A non-hierarchical, inclusive, and innovation-driven culture.
Opportunities for professional development, global exposure, and knowledge sharing.

Your Responsibilities:
Design, develop, and maintain Grafana dashboards and visualizations for system monitoring and analytics.
Work with time-series data from InfluxDB, Prometheus, Elasticsearch, and relational databases like MySQL, PostgreSQL, and Redshift.
Optimize dashboard performance by managing queries, data sources, and caching mechanisms.
Configure alerts and notifications to support proactive operational monitoring.
Collaborate with cross-functional teams, including DevOps, IT Operations, and Data Analytics, to understand and address their observability needs.
Use Power BI (optional) to supplement dashboarding with additional reports.
Customize and extend Grafana using plugins, scripts, or automation tools as needed.
Stay current with industry trends in data visualization, real-time analytics, and the Grafana/Power BI ecosystem.

We Expect:
4–7 years of experience developing Grafana dashboards and time-series visualizations.
Strong SQL/MySQL skills and experience working with multiple data sources.
Hands-on experience with Grafana and common data backends such as InfluxDB, Prometheus, PostgreSQL, Elasticsearch, or Redshift.
Understanding of time-series data vs. traditional data warehouse architecture.
Familiarity with scripting languages (e.g., JavaScript, Python, Golang) and query languages like PromQL.
Experience configuring alerts and automating monitoring workflows.
Exposure to Power BI (nice-to-have) for report building.
Experience with DevOps/IT Ops concepts (monitoring, alerting, and observability tooling).
Knowledge of version control (Git) and working in Agile/Scrum environments.
Strong problem-solving mindset, clear communication skills, and a proactive attitude.

Why Join Us:
Be part of a globally recognized brand committed to innovation and sustainability.
Join a team that values data transparency, experimentation, and impact.
Shape the future of brewing by enabling data-driven visibility across all operations.
Work in an international, collaborative environment that encourages learning and growth.

If you are passionate about monitoring systems, making time-series data actionable, and enabling real-time decision-making, we invite you to join Global Analytics at HEINEKEN. Your expertise will help shape the future of our digital brewery operations.
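To make the dashboard-automation side of this role concrete, here is a minimal, hypothetical sketch of provisioning a simple dashboard through Grafana's HTTP API from Python. The Grafana URL, API token, panel layout, and PromQL metric name are invented placeholders, not details from the posting.

```python
# Minimal sketch: push a one-panel dashboard to Grafana via its HTTP API.
# GRAFANA_URL, API_TOKEN, and the metric name are assumed placeholders.
import requests

GRAFANA_URL = "https://grafana.example.com"
API_TOKEN = "REPLACE_WITH_SERVICE_ACCOUNT_TOKEN"

dashboard = {
    "dashboard": {
        "title": "Brewery Line Throughput (example)",
        "panels": [
            {
                "type": "timeseries",
                "title": "Bottles per minute",
                # Example PromQL query; the metric is illustrative only.
                "targets": [{"expr": "rate(bottles_filled_total[5m])"}],
            }
        ],
    },
    "overwrite": True,
}

resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    json=dashboard,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Dashboard saved:", resp.json().get("url"))
```

In a real deployment such dashboards would usually be versioned as JSON in Git and applied through CI rather than ad hoc scripts, but the API call above is the basic building block.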
Posted 4 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Data Engineer
We are hiring a Data Engineer for Metro Brands Ltd.
Workstyle: Work from Office
Work location: Kurla, Mumbai

Summary: Responsible for building and maintaining the data pipelines that enable our data analytics and machine learning workflows.

Key Responsibilities:
Develop, test, and maintain scalable data pipelines for batch and real-time data processing.
Implement data extraction, transformation, and loading (ETL) processes.
Work with AWS Glue, S3, Athena, Lambda, Airflow, and other data processing frameworks.
Optimize data workflows and ensure data quality and consistency.
Collaborate with data scientists and analysts to understand data needs and requirements.

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Data Engineering, or a related field.
3+ years of experience in data engineering.
Proficiency in SQL and experience with relational databases.
Experience with dbt (data build tool) and cloud data warehouses such as Snowflake.
Experience with AWS services like S3, Glue, and Redshift.
Strong programming skills in Python with Spark.
Familiarity with workflow orchestration tools like Apache Airflow.
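As a hedged illustration of the orchestration stack named above (Airflow triggering AWS Glue), the sketch below shows one common way such a pipeline step is wired up. The DAG id, Glue job name, region, and schedule are invented examples, not details from the posting.

```python
# Illustrative Airflow DAG that kicks off an AWS Glue ETL job via boto3.
# "orders_etl_job", the DAG id, and the schedule are hypothetical.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job(**_context):
    glue = boto3.client("glue", region_name="ap-south-1")
    run = glue.start_job_run(JobName="orders_etl_job")
    print("Started Glue run:", run["JobRunId"])


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_glue_etl", python_callable=start_glue_job)
```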
Posted 4 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a skilled Data Engineer with 6+ years of experience to design, build, and maintain scalable data pipelines and perform advanced data analysis to support business intelligence and data-driven decision-making. The ideal candidate will have a strong foundation in computer science principles, extensive experience with SQL and big data tools, and proficiency in cloud platforms and data visualization tools.

Key Responsibilities:
Design, develop, and maintain robust, scalable ETL pipelines using Apache Airflow, DBT, Composer, Control-M, Cron, Luigi, and similar tools.
Build and optimize data architectures, including data lakes and data warehouses.
Integrate data from multiple sources, ensuring data quality and consistency.
Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions.
Analyze complex datasets to identify trends, generate actionable insights, and support decision-making.
Develop and maintain dashboards and reports using Tableau, Power BI, and Jupyter Notebooks for visualization and pipeline validation.
Manage and optimize relational and NoSQL databases such as MySQL, PostgreSQL, Oracle, MongoDB, and DynamoDB.
Work with big data tools and frameworks including Hadoop, Spark, Hive, Kafka, Informatica, Talend, SSIS, and Dataflow.
Utilize cloud data services and warehouses such as AWS Glue, GCP Dataflow, Azure Data Factory, Snowflake, Redshift, and BigQuery.
Support CI/CD pipelines and DevOps workflows using Git, Docker, Terraform, and related tools.
Ensure data governance, security, and compliance standards are met.
Participate in Agile and DevOps processes to enhance data engineering workflows.

Required Qualifications:
6+ years of professional experience in data engineering and data analysis roles.
Strong proficiency in SQL and experience with database management systems such as MySQL, PostgreSQL, Oracle, and MongoDB.
Hands-on experience with big data tools like Hadoop and Apache Spark.
Proficiency in Python programming.
Experience with data visualization tools such as Tableau, Power BI, and Jupyter Notebooks.
Proven ability to design, build, and maintain scalable ETL pipelines using tools like Apache Airflow, DBT, Composer (GCP), Control-M, Cron, and Luigi.
Familiarity with data engineering tools including Hive, Kafka, Informatica, Talend, SSIS, and Dataflow.
Experience working with cloud data warehouses and services (Snowflake, Redshift, BigQuery, AWS Glue, GCP Dataflow, Azure Data Factory).
Understanding of data modeling concepts and data lake/data warehouse architectures.
Experience supporting CI/CD practices with Git, Docker, Terraform, and DevOps workflows.
Knowledge of both relational and NoSQL databases, including PostgreSQL, BigQuery, MongoDB, and DynamoDB.
Exposure to Agile and DevOps methodologies.
Experience with Amazon Web Services (S3, Glue, Redshift, Lambda, Athena).

Preferred Skills:
Strong problem-solving and communication skills.
Ability to work independently and collaboratively in a team environment.
Experience with service development, REST APIs, and automation testing is a plus.
Familiarity with version control systems and workflow automation.
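For context on the Spark-based batch ETL work listed above, here is a small, hypothetical PySpark sketch that reads raw CSV data, cleans it, and writes a partitioned Parquet aggregate. The S3 paths and column names are illustrative assumptions, not from the posting.

```python
# Illustrative batch ETL step in PySpark: raw CSV -> cleaned, partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

# Hypothetical input and output locations.
raw = spark.read.option("header", True).csv("s3://example-raw/orders/")

cleaned = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Daily revenue per region, written as a partitioned curated dataset.
daily = cleaned.groupBy("order_date", "region").agg(
    F.count("*").alias("order_count"),
    F.sum("amount").alias("revenue"),
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/orders_daily/"
)
```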
Posted 4 days ago
5.0 years
0 Lacs
Tamil Nadu, India
On-site
We are looking for a skilled and motivated Senior Data Engineer to join our data integration and analytics team. The ideal candidate will have hands-on experience with Informatica IICS, AWS Redshift, Python scripting, and Unix/Linux systems. You will be responsible for building and maintaining scalable ETL pipelines to support business intelligence and analytics needs. We value individuals who are passionate about continuous learning, problem-solving, and enabling data-driven decision-making.

Years of Experience: Min. 5 years, including 3+ years of hands-on experience in Informatica IICS (Cloud Data Integration, Application Integration).
Primary Skills: Informatica IICS, AWS (especially Redshift)
Secondary Skills: Python, Unix/Linux

Role Description: As a Senior Data Engineer, you will lead the design, development, and management of scalable data platforms and pipelines. This role demands a strong technical foundation in data architecture, big data technologies, and database systems (both SQL and NoSQL), along with the ability to collaborate across functional teams to deliver robust, secure, and high-performing data solutions.

Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines and infrastructure.
Translate business and functional requirements into scalable, well-documented technical solutions.
Build and manage data flows across structured and unstructured data sources, including streaming and batch integrations.
Ensure data integrity and quality through automated validations, unit testing, and comprehensive documentation.
Optimize data processing performance and manage large datasets efficiently.
Collaborate closely with stakeholders and project teams to align data solutions with business objectives.
Implement and maintain security and privacy protocols to ensure safe data handling.
Set up development environments and configure tools and services.
Mentor junior data engineers and contribute to continuous improvement and automation initiatives.
Coordinate with QA and UAT teams during testing and release phases.

Role Requirements:
Strong proficiency in SQL, including procedures, performance tuning, and analytical functions.
Solid understanding of data warehousing concepts, including dimensional modeling and slowly changing dimensions (SCDs).
Hands-on experience with scripting languages (Shell/PowerShell).
Proficiency in data profiling, validation, and testing practices.
Excellent problem-solving, communication (written and verbal), and documentation skills.
Exposure to Agile methodologies and CI/CD practices.

Additional Requirements:
Overall 5+ years of experience, with 3+ years of hands-on experience in Informatica IICS (Cloud Data Integration, Application Integration).
Strong proficiency in AWS Redshift and writing complex SQL queries.
Solid programming experience in Python for scripting, data wrangling, and automation.
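As a hedged illustration of the Redshift-plus-Python validation work described above, the sketch below runs a simple row-count reconciliation against Redshift using psycopg2. The connection details and table names are placeholders.

```python
# Illustrative data-quality check: compare staging vs. target row counts in Redshift.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="REPLACE_ME",
)

QUERY = """
    SELECT
        (SELECT COUNT(*) FROM staging.orders)   AS staged_rows,
        (SELECT COUNT(*) FROM warehouse.orders) AS loaded_rows;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    staged, loaded = cur.fetchone()
    if staged != loaded:
        raise ValueError(f"Load mismatch: staged={staged}, loaded={loaded}")
    print(f"Validation passed: {loaded} rows loaded.")
```

A check like this would typically run as the last task of the pipeline, failing the run (and triggering an alert) when the load does not reconcile.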
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
General Skills & Experience: Minimum 10-18 yrs of experience
• Expertise in Spark (Scala/Python), Kafka, and cloud-native big data services (GCP, AWS, Azure) for ETL, batch, and stream processing.
• Deep knowledge of cloud platforms (AWS, Azure, GCP), including certification (preferred).
• Experience designing and managing advanced data warehousing and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse).
• Proven experience with building, managing, and optimizing ETL/ELT pipelines and data workflows for large-scale systems.
• Strong experience with data lakes, storage formats (Parquet, ORC, Delta, Iceberg), and data movement strategies (cloud and hybrid).
• Advanced knowledge of data modeling, SQL development, data partitioning, optimization, and database administration.
• Solid understanding and experience with Master Data Management (MDM) solutions and reference data frameworks.
• Proficient in implementing Data Lineage, Data Cataloging, and Data Governance solutions (e.g., AWS Glue Data Catalog, Azure Purview).
• Familiar with data privacy, data security, compliance regulations (GDPR, CCPA, HIPAA, etc.), and best practices for enterprise data protection.
• Experience with data integration tools and technologies (e.g., AWS Glue, GCP Dataflow, Apache NiFi/Airflow, etc.).
• Expertise in batch and real-time data processing architectures; familiarity with event-driven, microservices, and message-driven patterns.
• Hands-on experience in Data Analytics, BI & visualization tools (Power BI, Tableau, Looker, Qlik, etc.) and supporting complex reporting use-cases.
• Demonstrated capability with data modernization projects: migrations from legacy/on-prem systems to cloud-native architectures.
• Experience with data quality frameworks, monitoring, and observability (data validation, metrics, lineage, health checks).
• Background in working with structured, semi-structured, unstructured, temporal, and time series data at large scale.
• Familiarity with Data Science and ML pipeline integration (DevOps/MLOps, model monitoring, and deployment practices).
• Experience defining and managing enterprise metadata strategies.
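To ground the batch-and-stream processing expertise listed above, here is a small, hypothetical Spark Structured Streaming sketch that reads events from Kafka and lands them in a Parquet lake path. The broker address, topic, schema, and paths are invented, and the Kafka connector package is assumed to be on the Spark classpath.

```python
# Illustrative Spark Structured Streaming job: Kafka topic -> Parquet sink.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")  # placeholder broker
    .option("subscribe", "iot-events")                             # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-lake/iot_events/")
    .option("checkpointLocation", "s3://example-lake/_checkpoints/iot_events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```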
Posted 4 days ago
8.0 years
0 Lacs
India
Remote
Job Title: ENT DBA
Location: Remote
Experience: 8+ Years

Job Description: We are seeking a skilled and proactive Database Administrator (DBA) with strong SQL development expertise to manage, optimize, and support our database systems. The ideal candidate will have hands-on experience with cloud-based and on-premises database platforms, with a strong emphasis on AWS RDS, PostgreSQL, Redshift, and SQL Server. A background in developing and optimizing complex SQL queries, stored procedures, and data workflows is essential.

Key Responsibilities:
Design, implement, and maintain high-performance, scalable, and secure database systems on AWS RDS, PostgreSQL, Redshift, and SQL Server.
Develop, review, and optimize complex SQL queries, views, stored procedures, triggers, and functions.
Monitor database performance, implement tuning improvements, and ensure high availability and disaster recovery strategies.
Collaborate with development and DevOps teams to support application requirements, schema changes, and release cycles.
Perform database migrations, upgrades, and patch management.
Create and maintain documentation related to database architecture, procedures, and best practices.
Implement and maintain data security measures and access controls.
Support ETL processes and troubleshoot data pipeline issues as needed.

Mandatory Skills & Experience:
AWS RDS, PostgreSQL, Amazon Redshift, Microsoft SQL Server.
Proficiency in SQL development, including performance tuning and query optimization.
Experience with backup strategies, replication, monitoring, and high-availability database configurations.
Solid understanding of database design principles and best practices.
Knowledge of SSIS, SSRS, and SSAS development and management.
Knowledge of database partitioning, compression, and online performance monitoring/tuning.
Experience in the database release management process and script review.
Knowledge of database mirroring, Always On Availability Groups (AAG), and disaster recovery procedures.
Knowledge of database monitoring and different monitoring tools.
Knowledge of data modeling, database optimization, and relational database schemas.
Ability to write complex SQL queries and debug someone else's code.
Experience managing internal and external MS SQL database security.
Knowledge of database policies, certificates, Database Mail, and resource management.
Knowledge of SQL Server internals (memory usage, DMVs, threads, wait stats, Query Store, SQL Profiler).
Knowledge of cluster server management and failovers.
Knowledge of data modeling (SSAS) and reporting services (SSRS, Tableau, Power BI, Athena).
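As one hedged example of the proactive monitoring work described above, the following Python sketch flags long-running queries on a PostgreSQL/RDS instance via pg_stat_activity. The endpoint, credentials, and threshold are placeholders.

```python
# Illustrative check: list queries running longer than a threshold on PostgreSQL.
import psycopg2

THRESHOLD_MINUTES = 15  # hypothetical alerting threshold

conn = psycopg2.connect(
    host="example-db.abc123.ap-south-1.rds.amazonaws.com",  # placeholder RDS endpoint
    dbname="appdb",
    user="monitor",
    password="REPLACE_ME",
)

SQL = """
    SELECT pid, usename, state, now() - query_start AS runtime, query
    FROM pg_stat_activity
    WHERE state <> 'idle'
      AND now() - query_start > make_interval(mins => %s)
    ORDER BY runtime DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(SQL, (THRESHOLD_MINUTES,))
    for pid, user, state, runtime, query in cur.fetchall():
        print(f"[{runtime}] pid={pid} user={user} state={state}: {query[:80]}")
```

A script like this would normally feed an alerting channel rather than print to stdout, but the query against pg_stat_activity is the core of the check.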
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Embark on a transformative journey as a Software Engineer - Full Stack Developer at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

Operational Support Systems (OSS) Platform Engineering is a new team within the newly formed Network OSS & Tools functional unit in the Network Product domain at Barclays. The Barclays OSS Platform Engineering team is responsible for the design, build, and run of the underlying OSS infrastructure and toolchain, across cloud and on-prem, on which the core systems and tools required to run the Barclays Global Network reside.

To be successful in this role as a Software Engineer - Full Stack Developer you should possess the following skillsets:
Demonstrable expertise with front-end and back-end skillsets.
Java proficiency and the Spring ecosystem (Spring MVC, Data JPA, Security, etc.) with strong SQL and NoSQL integration expertise.
React.js and JavaScript expertise: Material UI, Ant Design, and state management expertise (Redux, Zustand, or Context API).
Strong knowledge of runtimes (virtualisation, containers, and Kubernetes) and expertise with test-driven development using frameworks like Cypress, Playwright, Selenium, etc.
Strong knowledge of CI/CD pipelines and tooling: GitHub Actions, Jenkins, GitLab CI, or similar.
Monitoring and observability: logging, tracing, and alerting, with knowledge of SRE integrations into open-source tooling like Grafana/ELK.

Some other highly valued skills include:
Expertise building ELT pipelines and cloud/storage integrations, including data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.).
Expertise with security (OAuth2, CSRF/XSS protection), secure coding practices, and performance optimization: JVM tuning, performance profiling, caching, lazy loading, rate limiting, and high availability with large datasets.
Expertise in public, private, and hybrid cloud technologies (DC, AWS, Azure, GCP, etc.) and across broad network domains (physical and wireless): WAN/SD-WAN/LAN/WLAN, etc.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Pune office.

Purpose of the role: To lead and manage engineering teams, providing technical guidance, mentorship, and support to ensure the delivery of high-quality software solutions, driving technical excellence, fostering a culture of innovation, and collaborating with cross-functional teams to align technical decisions with business objectives.

Accountabilities:
Lead engineering teams effectively, fostering a collaborative and high-performance culture to achieve project goals and meet organizational objectives.
Oversee timelines, team allocation, risk management, and task prioritization to ensure the successful delivery of solutions within scope, time, and budget.
Mentor and support team members' professional growth, conduct performance reviews, provide actionable feedback, and identify opportunities for improvement.
Evaluate and enhance engineering processes, tools, and methodologies to increase efficiency, streamline workflows, and optimize team productivity.
Collaborate with business partners, product managers, designers, and other stakeholders to translate business requirements into technical solutions and ensure a cohesive approach to product development.
Enforce technology standards, facilitate peer reviews, and implement robust testing practices to ensure the delivery of high-quality solutions.

Vice President Expectations:
To contribute or set strategy, drive requirements, and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures.
If managing a team, they define jobs and responsibilities, planning for the department's future needs and operations, counselling employees on performance and contributing to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements.
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
OR for an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction. They will lead collaborative, multi-year assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions.
Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment.
Manage and mitigate risks through assessment, in support of the control and governance agenda.
Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does.
Demonstrate comprehensive understanding of the organisation's functions to contribute to achieving the goals of the business.
Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategies.
Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions.
Adopt and include the outcomes of extensive research in problem-solving processes.
Seek out, build, and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 4 days ago
1.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Engineering
Experience: Sr. Associate
Primary Address: Bangalore, Karnataka

Overview: Voyager (94001), India, Bangalore, Karnataka
Senior Associate - Data Engineer

Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you'll have the opportunity to be on the forefront of driving a major transformation within Capital One.

What You'll Do:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies.
Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems.
Utilize programming languages like Java, Scala, Python and open-source RDBMS and NoSQL databases, and cloud-based data warehousing services such as Redshift and Snowflake.
Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community.
Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment.
Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance.

Basic Qualifications:
Bachelor's Degree
At least 1.5 years of experience in application development (internship experience does not apply)
At least 1 year of experience in big data technologies

Preferred Qualifications:
3+ years of experience in application development including Python, SQL, Scala, or Java
1+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
2+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
1+ years of experience working on real-time data and streaming applications
1+ years of experience with NoSQL implementation (Mongo, Cassandra)
1+ years of data warehousing experience (Redshift or Snowflake)
2+ years of experience with UNIX/Linux including basic commands and shell scripting
1+ years of experience with Agile engineering practices

At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).

How We Hire
We take finding great coworkers pretty seriously.
Step 1: Apply. It only takes a few minutes to complete our application and assessment.
Step 2: Screen and Schedule. If your application is a good match you'll hear from one of our recruiters to set up a screening interview.
Step 3: Interview(s). Now's your chance to learn about the job, show us who you are, share why you would be a great addition to the team and determine if Capital One is the place for you.
Step 4: Decision. The team will discuss — if it's a good fit for us and you, we'll make it official!

How to Pick the Perfect Career Opportunity
Overwhelmed by a tough career choice? Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence.

Your wellbeing is our priority
Our benefits and total compensation package is designed for the whole person. Caring for both you and your family.
Healthy Body, Healthy Mind: You have options and we have the tools to help you decide which health plans best fit your needs.
Save Money, Make Money: Secure your present, plan for your future and reduce expenses along the way.
Time, Family and Advice: Options for your time, opportunities for your family, and advice along the way. It's time to BeWell.

Career Journey
Here's how the team fits together. We're big on growth and knowing who and how coworkers can best support you.
Posted 4 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Engineering
Experience: Principal Associate
Primary Address: Bangalore, Karnataka

Overview: Voyager (94001), India, Bangalore, Karnataka
Principal Associate - Data Engineer

Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you'll have the opportunity to be on the forefront of driving a major transformation within Capital One.

What You'll Do:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies.
Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems.
Utilize programming languages like Java, Scala, Python and open-source RDBMS and NoSQL databases, and cloud-based data warehousing services such as Redshift and Snowflake.
Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community.
Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment.
Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance.

Basic Qualifications:
Bachelor's Degree
At least 3 years of experience in application development (internship experience does not apply)
At least 1 year of experience in big data technologies

Preferred Qualifications:
5+ years of experience in application development including Python, SQL, Scala, or Java
2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
3+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
2+ years of experience working on real-time data and streaming applications
2+ years of experience with NoSQL implementation (Mongo, Cassandra)
2+ years of data warehousing experience (Redshift or Snowflake)
3+ years of experience with UNIX/Linux including basic commands and shell scripting
2+ years of experience with Agile engineering practices

At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
Posted 4 days ago
0.0 years
0 Lacs
Hyderabad, Telangana
On-site
BI Specialist II
Hyderabad, India; Ahmedabad, India | Information Technology | 312656

Job Description
About The Role: Grade Level (for internal use): 09

The Team: Are you ready to dive into the world of data and uncover insights that shape global commodity markets? We're looking for a passionate BI Developer to join our Business Intelligence team within the Commodity Insights division at S&P Global. At S&P Global, we are on a mission to harness the power of data to unlock insights that propel our business forward. We believe in innovation, collaboration, and the relentless pursuit of excellence. Join our dynamic team and be a part of a culture that celebrates creativity and encourages you to push the boundaries of what's possible.

Key Responsibilities

Unlocking the Power of Data: Collaborate on the end-to-end data journey, helping collect, cleanse, and transform diverse data sources into actionable insights that shape business strategies for functional leaders. Work alongside senior BI professionals to build powerful ETL processes, ensuring data quality, consistency, and accessibility.

Crafting Visual Storytelling: Develop eye-catching, impactful dashboards and reports that tell the story of commodity trends, prices, and global market dynamics. Bring data to life for stakeholders across the company, including executive teams, analysts, and developers, by helping to create visually compelling and interactive reporting tools. Mentor and train users on dashboard usage for efficient utilization of insights.

Becoming a Data Detective: Dive deep into commodities data to uncover trends, patterns, and hidden insights that influence critical decisions in real time. Demonstrate strong analytical skills to swiftly grasp business needs and translate them into actionable insights. Collaborate with stakeholders to define key metrics and KPIs and contribute to data-driven decisions that impact the organization's direction.

Engaging with Strategic Minds: Work together with cross-functional teams within business operations to turn complex business challenges into innovative data solutions. Gather, refine, and translate business requirements into insightful reports and dashboards that push our BI team to new heights. Provide ongoing support to cross-functional teams, addressing issues and adapting to changing business processes.

Basic Qualifications:
3+ years of professional experience in BI projects, focusing on dashboard development using Power BI or similar tools and deploying them on their respective online platforms for easy access.
Proficiency in working with various databases such as Redshift, Oracle, and Databricks, using SQL for data manipulation, and implementing ETL processes for BI dashboards.
Ability to identify meaningful patterns and trends in data to provide valuable insights for business decision-making.
Skilled in requirement gathering and developing BI solutions.
Candidates with a strong background/proficiency in Power BI and Power Platform tools such as Power Automate/Apps, and intermediate to advanced proficiency in Python, are preferred.
Essential understanding of data modeling techniques tailored to problem statements.
Familiarity with cloud platforms (e.g., Azure, AWS) and data warehousing.
Exposure to GenAI concepts and tools such as ChatGPT.
Experience with Agile project implementation methods.
Excellent written and verbal communication skills.
Must be able to self-start and succeed in a fast-paced environment.
Additional/Preferred Qualifications:
Knowledge of Generative AI, Microsoft Copilot, and Microsoft Fabric is a plus.
Ability to write complex SQL queries or enhance the performance of existing ETL pipelines is a must.
Familiarity with Azure DevOps will be an added advantage.

Shift Timings: 1 PM-10 PM IST (flexibility required)

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global.
Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 312656 Posted On: 2025-06-26 Location: Hyderabad, Telangana, India
Posted 4 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Voyager (94001), India, Bangalore, Karnataka Principal Associate - Data Engineer Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you’ll have the opportunity to be on the forefront of driving a major transformation within Capital One. What You’ll Do: Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems Utilize programming languages like Java, Scala, Python and Open Source RDBMS and NoSQL databases and Cloud based data warehousing services such as Redshift and Snowflake Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance Basic Qualifications: Bachelor’s Degree At least 3 years of experience in application development (Internship experience does not apply) At least 1 year of experience in big data technologies Preferred Qualifications: 5+ years of experience in application development including Python, SQL, Scala, or Java 2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 3+ years experience with Distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 2+ years experience working on real-time data and streaming applications 2+ years of experience with NoSQL implementation (Mongo, Cassandra) 2+ years of data warehousing experience (Redshift or Snowflake) 3+ years of experience with UNIX/Linux including basic commands and shell scripting 2+ years of experience with Agile engineering practices At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. 
Posted 4 days ago
4.0 - 5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Overview
We are seeking an experienced Data Engineer with 4 to 5 years of experience to design, build, and maintain scalable data infrastructure and pipelines. You'll work with cross-functional teams to ensure reliable data flow from various sources to analytics platforms, enabling data-driven decision making across the organization.

Key Responsibilities

Data Pipeline Development: Design and implement robust ETL/ELT pipelines using tools like Apache Airflow, Spark, or cloud-native solutions. Build real-time and batch processing systems to handle high-volume data streams. Optimize data workflows for performance, reliability, and cost-effectiveness.

Infrastructure & Architecture: Develop and maintain data warehouses and data lakes using platforms like Snowflake, Redshift, BigQuery, or Databricks. Implement data modeling best practices including dimensional modeling and schema design. Architect scalable solutions on cloud platforms (AWS, GCP, Azure).

Data Quality & Governance: Implement data quality checks, monitoring, and alerting systems. Establish data lineage tracking and metadata management. Ensure compliance with data privacy regulations and security standards.

Collaboration & Support: Partner with data scientists, analysts, and business stakeholders to understand requirements. Provide technical guidance on data architecture decisions. Mentor junior engineers and contribute to team knowledge sharing.

Required Qualifications

Technical Skills: 4-5 years of experience in data engineering or a related field. Proficiency in Python, SQL, and at least one other programming language (Java, Scala, Go). Strong experience with big data technologies (Spark, Kafka, Hadoop ecosystem). Hands-on experience with cloud platforms and their data services. Knowledge of containerization (Docker, Kubernetes) and infrastructure as code. (ref:hirist.tech)
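To illustrate the data quality checks mentioned above, here is a small, hypothetical Python sketch that validates a batch extract with pandas before it is loaded downstream. The file path, key column, and rules are invented examples, and reading from S3 assumes s3fs is installed.

```python
# Illustrative pre-load validation on a batch extract using pandas.
import pandas as pd

df = pd.read_parquet("s3://example-staging/customers/latest.parquet")  # placeholder path

errors = []
if df.empty:
    errors.append("extract is empty")
if df["customer_id"].isna().any():
    errors.append("null customer_id values found")
if df["customer_id"].duplicated().any():
    errors.append("duplicate customer_id values found")
if (df["signup_date"] > pd.Timestamp.now()).any():
    errors.append("signup_date values in the future")

if errors:
    raise ValueError("Data quality check failed: " + "; ".join(errors))
print(f"Validation passed for {len(df)} rows.")
```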
Posted 4 days ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
*12-Month Fixed Term Contract*

As a Business Intelligence Engineer, you will be deciphering our customers' ever-evolving needs and shaping solutions that elevate their experience with Amazon.

A successful candidate will possess:
Strong analytical and problem-solving skills, leveraging data and metrics to inform strategic decisions.
Impeccable attention to detail and clear, compelling communication skills, capable of articulating data insights to diverse stakeholders.

Key job responsibilities
Deliver on all analytics requirements across business areas.
Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, data pipelines, etc. to drive key business decisions.
Ensure data accuracy by validating data for new and existing tools.
Perform both ad-hoc and strategic data analyses.
Build various data visualizations to tell the story and let the data speak for itself.
Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
Build automation to reduce dependencies on manual data pulls, etc.

A day in the life
A day in the life of a Business Intelligence Engineer includes working closely with Product Managers and Software Developers: building dashboards, performing root cause analysis, and sharing actionable insights with stakeholders to enable data-informed decision making. Lead reporting and analytics initiatives to drive data-informed decision making. Design, develop, and maintain ETL processes and data visualization dashboards using Amazon QuickSight. Transform complex business requirements into actionable analytics solutions.

Basic Qualifications
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools.
Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, Chi-squared).
Experience with a scripting language (e.g., Python, Java, or R).

Preferred Qualifications
Master's degree or advanced technical degree.
Knowledge of data modeling and data pipeline design.
Experience with statistical analysis and correlation analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI MAA 15 SEZ
Job ID: A3018486
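As a hedged sketch of the "automation to reduce manual data pulls" responsibility above, the snippet below pulls a small weekly aggregate from Redshift into pandas for a recurring report. The cluster endpoint, credentials, schema, and query are placeholders.

```python
# Illustrative scheduled data pull: Redshift aggregate -> CSV for a recurring report.
import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="bi",
    user="report_user",
    password="REPLACE_ME",
)

QUERY = """
    SELECT order_date, marketplace, COUNT(*) AS orders, SUM(revenue) AS revenue
    FROM analytics.orders
    WHERE order_date >= DATEADD(day, -7, CURRENT_DATE)
    GROUP BY 1, 2
    ORDER BY 1, 2;
"""

# pandas warns about non-SQLAlchemy connections; acceptable for a sketch.
weekly = pd.read_sql(QUERY, conn)
weekly.to_csv("weekly_orders_report.csv", index=False)
print(f"Wrote {len(weekly)} rows to weekly_orders_report.csv")
```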
Posted 4 days ago
8.0 years
0 Lacs
India
Remote
Position: Senior AWS Data Engineer
Location: Remote
Salary: Open
Work Timings: 2:30 PM to 11:30 PM IST
Need someone who can join immediately or within 15 days.

Responsibilities:
Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
Implement data processing and transformation workflows using Apache Spark and SQL to support analytics and reporting requirements.
Build and maintain orchestration workflows to automate data pipeline execution, scheduling, and monitoring.
Collaborate with analysts and business stakeholders to understand data requirements and deliver scalable data solutions.
Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.

Qualifications:
Minimum 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
Hands-on experience designing and building data pipelines on AWS cloud infrastructure.
Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
Strong experience with Apache Spark for data processing and analytics.
Hands-on experience orchestrating and scheduling data pipelines using AppFlow, EventBridge, and Lambda.
Solid understanding of data modeling, database design principles, SQL, and Spark SQL.
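As a minimal sketch of the EventBridge-plus-Lambda orchestration named above (assuming the Lambda is invoked by a scheduled EventBridge rule, with a hypothetical Glue job name), one common pattern looks like this:

```python
# Illustrative AWS Lambda handler: invoked by a scheduled EventBridge rule,
# it starts a Glue job and passes the logical run date as a job argument.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Scheduled EventBridge events carry an ISO-8601 "time" field.
    run_date = event.get("time", "")[:10]
    response = glue.start_job_run(
        JobName="daily_sales_etl",                # hypothetical job name
        Arguments={"--run_date": run_date},
    )
    return {"jobRunId": response["JobRunId"], "runDate": run_date}
```

Keeping the Lambda thin like this (just starting the Glue run and returning its id) leaves retries, alerting, and dependency handling to the orchestration layer around it.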
Posted 4 days ago
6.0 - 8.0 years
8 - 12 Lacs
Mumbai, Bengaluru, Delhi / NCR
Work from Office
Skills Required Experience in designing and building a serverless data lake solution using a layered component architecture covering Ingestion, Storage, Processing, Security & Governance, Data Cataloguing & Search, and Consumption layers. Hands-on experience in AWS serverless technologies such as Lake Formation, Glue, Glue Python, Glue Workflows, Step Functions, S3, Redshift, QuickSight, Athena, AWS Lambda, Kinesis. Must have experience in Glue. Experience in designing, building, orchestrating and deploying multi-step data processing pipelines using Python and Java. Experience in managing source data access security, configuring authentication and authorisation, enforcing data policies and standards. Experience in AWS environment setup and configuration. Minimum 6 years of relevant experience with at least 3 years in building solutions using AWS. Ability to work under pressure and commitment to meet customer expectations. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
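To illustrate the consumption layer mentioned above, a minimal boto3 sketch of running an Athena query over a Glue-catalogued table might look like this; the database, table, and results bucket are assumed names.

```python
# Illustrative sketch of a data lake consumption layer: running an Athena query
# against a Glue-catalogued table. Database, table and bucket names are placeholders.
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT order_status, COUNT(*) AS cnt FROM orders GROUP BY order_status",
    QueryExecutionContext={"Database": "lake_curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query leaves the QUEUED/RUNNING states
while athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
    time.sleep(2)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
print(rows[:5])
```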
Posted 4 days ago
10.0 years
0 Lacs
India
Remote
Join phData, a dynamic and innovative leader in the modern data stack. We partner with major cloud data platforms like Snowflake, AWS, Azure, GCP, Fivetran, Pinecone, Glean and dbt to deliver cutting-edge services and solutions. We're committed to helping global enterprises overcome their toughest data challenges. phData is a remote-first global company with employees based in the United States, Latin America and India. We celebrate the culture of each of our team members and foster a community of technological curiosity, ownership and trust. Even though we're growing extremely fast, we maintain a casual, exciting work environment. We hire top performers and allow you the autonomy to deliver results. 5x Snowflake Partner of the Year (2020, 2021, 2022, 2023, 2024) Fivetran, dbt, Atlation, Matillion Partner of the Year #1 Partner in Snowflake Advanced Certifications 600+ Expert Cloud Certifications (Sigma, AWS, Azure, Dataiku, etc) Recognized as an award-winning workplace in US, India and LATAM About You: You are a strategic thinker with a passion for building scalable, cloud-native solutions. With a strong technical background, you excel in driving data platform implementations and managing complex projects. Your expertise in platforms like Snowflake, AWS, and Azure is complemented by a deep understanding of how to align data infrastructure with strategic business goals. You have a proven track record in professional consulting, building scalable, secure solutions that optimize data platform performance. You thrive in environments that challenge you to think critically, lead technical teams, and optimize systems for continuous improvement. As a part of our Managed Services team, you are driven by a commitment to customer success and long-term growth. You embrace best practices, champion effective platform management, and contribute to the evolution of data platforms by promoting phData’s Operational Maturity Framework. Your approach is hands-on, collaborative, and results-oriented, making you a key player in ensuring clients' ongoing success. Key Responsibilities: Technical Leadership: Lead the design, architecture, and implementation of large-scale data platform projects on Snowflake, AWS, and Azure. Guide technical teams through data migration, integration, and performance optimization. Platform Optimization, Integration and Automation: Identify opportunities for automation and performance optimization to enhance client’s data platform capabilities. Lead data migration to cloud platforms (e.g., Snowflake, Redshift), ensuring smooth integration of data lakes, data warehouses, and distributed systems. Platform Security: Setting up a client's data platform with industry best practices & robust security standards including Data Governance. Process Engineering: Adapt to ongoing changes in platform environment, data, and technology by debugging, enhancing features, and re-engineering as needed. Plan ahead for changes to upstream data sources to minimize impact on users and systems, ensuring scalable adoption of modern data platforms. Consulting Leadership: Manage multiple customer engagements to ensure timely project delivery, optimize operations to prevent resource overruns, and maximize platform ROI. Serve as a trusted advisor, offering strategic guidance on data platform optimization and addressing technical challenges to meet business goals. 
Cross-functional Collaboration & Mentorship: Partner with sales, engineering, and support teams to ensure seamless project delivery and high customer satisfaction. Provide mentorship and technical guidance to junior engineers, promoting a culture of continuous learning and excellence. Key Skills and Experience: Required: Solutions Architecture: 10 years of hands-on experience of architecting, designing, implementing, and managing cloud-native data platforms and solutions. Client-Facing Skills: Strong communication skills, with experience presenting to executive stakeholders and creating detailed solution documentation. Data Platforms: Extensive experience managing enterprise data platforms (e.g., Snowflake, Redshift, Azure Data Warehouse) with strong skills in performance tuning and troubleshooting. Cloud Expertise: Deep knowledge of AWS, Azure, and Snowflake ecosystems, including services like S3, ADLS, Kinesis, Data Factory, DBT, and Kafka. IT Operations: Build and manage scalable, secure data platforms aligned with strategic goals. Excel at optimizing systems and driving continuous improvement in platform performance. SQL Mastery: Advanced proficiency in Microsoft SQL, including writing, debugging, and optimizing queries. DevOps and Infrastructure: Proficiency in infrastructure-as-code (IaC) tools like Terraform or CloudFormation, and experience with CI/CD pipelines (e.g., Bitbucket, GitHub). Data Integration Tools: Expertise with tools such as AWS Data Migration Services, Azure Data Factory, Matillion, Fivetran, or Spark. Preferred: Certifications: Snowflake SnowPro Core certification or equivalent. Python Proficiency: Experience using Python for task automation and operational efficiency. CI/CD Expertise: Hands-on experience with automated deployment frameworks, such as Flyway or Liquibase. Education: A Bachelor’s degree in Computer Science, Engineering, or a related field is highly preferred Advanced degrees or equivalent certifications are preferred. Why phData? We offer: Remote-First Workplace Medical Insurance for Self & Family Medical Insurance for Parents Term Life & Personal Accident Wellness Allowance Broadband Reimbursement 2-4 week bootcamp and provide continuous learning opportunities to enhance your skills and expertise Other perks include paid certifications, professional development allowance and additional compensation for creating company-approved content (dashboards, blogs, videos, whitepapers, etc.) phData celebrates diversity and is committed to creating an inclusive environment for all employees. Our approach helps us to build a winning team that represents a variety of backgrounds, perspectives, and abilities. So, regardless of how your diversity expresses itself, you can find a home here at phData. We are proud to be an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, color, religion, national origin, sex (including pregnancy), sexual orientation, gender identity, gender expression, age, veteran status, genetic information, disability, or other applicable legally protected characteristics. If you would like to request an accommodation due to a disability, please contact us at People Operations.
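As one hedged example of the platform-monitoring and SQL work described above, the sketch below pulls the slowest Snowflake queries from the standard ACCOUNT_USAGE.QUERY_HISTORY view; the account, user, and warehouse names are placeholders.

```python
# A hedged sketch of a platform health check a managed-services engineer might
# automate: listing the longest-running Snowflake queries over the last day.
# Connection details and object names below are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="svc_monitoring", password="***", warehouse="ADMIN_WH"
)
cur = conn.cursor()
cur.execute("""
    SELECT query_id, user_name, total_elapsed_time / 1000 AS seconds
    FROM snowflake.account_usage.query_history
    WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
    ORDER BY total_elapsed_time DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```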
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Sonatype is the software supply chain security company. We provide the world’s best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world’s largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100 and 15 million software developers, rely on Sonatype to optimize their software supply chains. The Opportunity We’re looking for a Senior Data Engineer to join our growing Data Platform team. You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, machine learning, and business intelligence across Sonatype. You’ll work closely with stakeholders across product, engineering, and business teams to ensure data is reliable, accessible, and actionable. This role is ideal for someone who thrives on solving complex data challenges at scale and enjoys building high-quality, maintainable systems. What You’ll Do Design, build, and maintain scalable data pipelines and ETL/ELT processes Architect and optimize data models and storage solutions for analytics and operational use Collaborate with data scientists, analysts, and engineers to deliver trusted, high-quality datasets Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake) Implement observability, alerting, and data quality monitoring for critical pipelines Drive best practices in data engineering, including documentation, testing, and CI/CD Contribute to the design and evolution of our next-generation data lakehouse architecture Minimum Qualifications 5+ years of experience as a Data Engineer or in a similar backend engineering role Strong programming skills in Python, Scala, or Java Hands-on experience with HBase or similar NoSQL columnar stores Hands-on experience with distributed data systems like Spark, Kafka, or Flink Proficient in writing complex SQL and optimizing queries for performance Experience building and maintaining robust ETL/ELT (Data Warehousing) pipelines in production Familiarity with workflow orchestration tools (Airflow, Dagster, or similar) Understanding of data modeling techniques (star schema, dimensional modeling, etc.) Bonus Points Experience working with Databricks, dbt, Terraform, or Kubernetes Familiarity with streaming data pipelines or real-time processing Exposure to data governance frameworks and tools Experience supporting data products or ML pipelines in production Strong understanding of data privacy, security, and compliance best practices Why You’ll Love Working Here Data with purpose: Work on problems that directly impact how the world builds secure software Modern tooling: Leverage the best of open-source and cloud-native technologies Collaborative culture: Join a passionate team that values learning, autonomy, and impact At Sonatype, we value diversity and inclusivity. 
We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
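For the pipeline and data-quality responsibilities listed in this role, a minimal Airflow DAG sketch (assuming Airflow 2.x) could pair a load task with a row-count check; the task bodies and names here are placeholders rather than Sonatype's actual pipelines.

```python
# A minimal Airflow 2.x DAG sketch: one placeholder ELT task followed by a
# simple data-quality check. Task logic and table names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    print("extract from source, load to warehouse")  # placeholder ELT step

def check_row_count():
    rows = 1_000  # placeholder: in practice, query the target table here
    assert rows > 0, "data quality check failed: target table is empty"

with DAG(
    dag_id="example_elt_with_quality_check",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    quality = PythonOperator(task_id="check_row_count", python_callable=check_row_count)
    load >> quality
```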
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Sonatype is the software supply chain security company. We provide the world’s best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world’s largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100 and 15 million software developers, rely on Sonatype to optimize their software supply chains. The Opportunity We’re looking for a Senior Data Engineer to join our growing Data Platform team. This role is a hybrid of data engineering and business intelligence, ideal for someone who enjoys solving complex data challenges while also building intuitive and actionable reporting solutions. You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, dashboards, machine learning, and decision-making across Sonatype. You’ll also be responsible for delivering clear, compelling, and insightful business intelligence through tools like Looker Studio and advanced SQL queries. What You’ll Do Design, build, and maintain scalable data pipelines and ETL/ELT processes. Architect and optimize data models and storage solutions for analytics and operational use. Create and manage business intelligence reports and dashboards using tools like Looker Studio, Power BI, or similar. Collaborate with data scientists, analysts, and stakeholders to ensure datasets are reliable, meaningful, and actionable. Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake). Write complex, high-performance SQL queries to support reporting and analytics needs. Implement observability, alerting, and data quality monitoring for critical pipelines. Drive best practices in data engineering and business intelligence, including documentation, testing, and CI/CD. Contribute to the evolution of our next-generation data lakehouse and BI architecture. What We’re Looking For 5+ years of experience as a Data Engineer or in a hybrid data/reporting role. Strong programming skills in Python, Java, or Scala. Proficiency with data tools such as Databricks, data modeling techniques (e.g., star schema, dimensional modeling), and data warehousing solutions like Snowflake or Redshift. Hands-on experience with modern data platforms and orchestration tools (e.g., Spark, Kafka, Airflow). Proficient in SQL with experience in writing and optimizing complex queries for BI and analytics. Experience with BI tools such as Looker Studio, Power BI, or Tableau. Experience in building and maintaining robust ETL/ELT pipelines in production. Understanding of data quality, observability, and governance best practices.
Bonus Points Experience with dbt, Terraform, or Kubernetes. Familiarity with real-time data processing or streaming architectures. Understanding of data privacy, compliance, and security best practices in analytics and reporting. Why You’ll Love Working Here Data with purpose: Work on problems that directly impact how the world builds secure software. Full-spectrum impact: Use both engineering and analytical skills to shape product, strategy, and operations. Modern tooling: Leverage the best of open-source and cloud-native technologies. Collaborative culture: Join a passionate team that values learning, autonomy, and real-world impact. At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
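As a hedged illustration of the "complex, high-performance SQL" this role calls for behind BI dashboards, the query below computes month-over-month revenue growth with window functions; the schema and column names are invented.

```python
# Illustrative only: a window-function query of the kind a BI-focused data
# engineer might ship behind a Looker Studio or Power BI dashboard.
# Table and column names are invented for the example.
MOM_GROWTH_SQL = """
SELECT
    month,
    revenue,
    revenue - LAG(revenue) OVER (ORDER BY month)                  AS mom_change,
    revenue / NULLIF(LAG(revenue) OVER (ORDER BY month), 0) - 1   AS mom_growth_pct
FROM (
    SELECT DATE_TRUNC('month', order_date) AS month, SUM(amount) AS revenue
    FROM analytics.orders
    GROUP BY 1
) monthly
ORDER BY month
"""
print(MOM_GROWTH_SQL)
```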
Posted 4 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Sonatype is the software supply chain security company. We provide the world’s best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world’s largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100 and 15 million software developers, rely on Sonatype to optimize their software supply chains. About The Role The Engineering Manager – Data role at Sonatype blends hands-on data engineering with leadership and strategic influence. You will lead high-performing data engineering teams to build the infrastructure, pipelines, and systems that fuel analytics, business intelligence, and machine learning across our global products. We’re looking for a leader who brings deep technical experience in modern data platforms, is fluent in programming, and understands the nuances of open-source consumption and software supply chain security. This hybrid role is based out of our Hyderabad office. What You’ll Do Lead, mentor, and grow a team of data engineers responsible for building scalable, secure, and maintainable data solutions. Design and architect data pipelines, Lakehouse systems, and warehouse models using tools such as Databricks, Airflow, Spark, and Snowflake/Redshift. Stay hands-on—write, review, and guide production-level code in Python, Java, or similar languages. Ensure strong foundations in data modeling, governance, observability, and data quality. Collaborate with cross-functional teams including Product, Security, Engineering, and Data Science to translate business needs into data strategies and deliverables. Apply your knowledge of open-source component usage, dependency management, and software composition analysis to ensure our data platforms support secure development practices. Embed application security principles into data platform design, supporting Sonatype’s mission to secure the software supply chain. Foster an engineering culture that prioritizes continuous improvement, technical excellence, and team ownership. Who You Are A technical leader with a strong background in data engineering, platform design, and secure software development. Comfortable operating across domains—data infrastructure, programming, architecture, security, and team leadership. Passionate about delivering high-impact results through technical contributions, mentoring, and strategic thinking. Familiar with modern data engineering practices, open-source ecosystems, and the challenges of managing data securely on a scale. A collaborative communicator who thrives in hybrid and cross-functional team environments. What You Need 6+ years of experience in data engineering, backend systems, or infrastructure development. 2+ year of experience in a technical leadership or engineering management role with hands-on contribution. Expertise in data technologies: Databricks, Spark, Airflow, Snowflake/Redshift, dbt, etc. 
Strong programming skills in Python, Java, or Scala with experience building robust, production-grade systems. Experience in data modeling (dimensional modeling, star/snowflake schema), data warehousing, and ELT/ETL pipeline development. Understanding of software dependency management and open-source consumption patterns. Familiarity with application security principles and a strong interest in secure software supply chains. Experience supporting real-time data systems or streaming architectures. Exposure to machine learning pipelines or data productization. Experience with tools like Terraform, Kubernetes, and CI/CD for data engineering workflows. Knowledge of data governance frameworks and regulatory compliance (GDPR, SOC2, etc.). Why Join Us? Help secure the software supply chain for millions of developers worldwide. Build meaningful software in a collaborative, fast-moving environment with strong technical peers. Stay hands-on while leading—technical leadership is part of the job, not separate from it. Join a global engineering organization with deep local roots and a strong team culture. Competitive salary, great benefits, and opportunities for growth and innovation. At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
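To ground the dimensional-modeling skill mentioned above, a tiny star schema (one fact table, two dimensions) might be sketched as the DDL below; the table names are illustrative and not a description of Sonatype's actual models.

```python
# A hedged sketch of a minimal star schema: one fact table referencing two
# dimension tables. Names and columns are invented for illustration.
STAR_SCHEMA_DDL = """
CREATE TABLE dim_customer (
    customer_key  INT PRIMARY KEY,
    customer_name VARCHAR(200),
    region        VARCHAR(50)
);

CREATE TABLE dim_date (
    date_key  INT PRIMARY KEY,   -- e.g. 20240131
    full_date DATE,
    year      INT,
    month     INT
);

CREATE TABLE fact_downloads (
    date_key     INT REFERENCES dim_date(date_key),
    customer_key INT REFERENCES dim_customer(customer_key),
    download_cnt BIGINT
);
"""
print(STAR_SCHEMA_DDL)
```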
Posted 4 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 3 to 4 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Hands-on Mantas (Oracle FCCM) Scenario Development experience throughout the full development life cycle Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills Technical Skills (Must Have) ETL: Hands-on experience building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager, Scenario Development, thorough knowledge and hands-on experience in Mantas FSDM, DIS, Batch Scenario Manager Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure to working with Event/File/Table formats such as Avro, Parquet, Iceberg, Delta Others: Basics of job schedulers like Autosys. Basics of Entitlement management Certification on any of the above topics would be an advantage.
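Mantas/FCCM scenario development itself uses Oracle's proprietary tooling, so as a generic, hedged stand-in for the Spark pipeline skills listed above, the sketch below reads raw transactions, applies a simple rule-style filter, and writes a curated output; the paths and threshold are placeholders.

```python
# Not Mantas-specific; only a generic PySpark pipeline sketch: read raw
# transactions, apply an illustrative rule-style filter, write a curated table.
# Paths and the threshold are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-curation").getOrCreate()

txns = spark.read.parquet("s3://example-bucket/raw/transactions/")

flagged = (
    txns.filter(F.col("amount") > 10_000)                  # illustrative threshold
        .withColumn("flag_reason", F.lit("large_cash_amount"))
)

flagged.write.mode("overwrite").parquet("s3://example-bucket/curated/flagged_transactions/")
```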
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 4 days ago
0 years
0 Lacs
India
Remote
We Breathe Life Into Data At Komodo Health, our mission is to reduce the global burden of disease. And we believe that smarter use of data is essential to this mission. That’s why we built the Healthcare Map — the industry’s largest, most complete, precise view of the U.S. healthcare system — by combining de-identified, real-world patient data with innovative algorithms and decades of clinical experience. The Healthcare Map serves as our foundation for a powerful suite of software applications, helping us answer healthcare’s most complex questions for our partners. Across the healthcare ecosystem, we’re helping our clients unlock critical insights to track detailed patient behaviors and treatment patterns, identify gaps in care, address unmet patient needs, and reduce the global burden of disease. As we pursue these goals, it remains essential to us that we stay grounded in our values: be awesome, seek growth, deliver “wow,” and enjoy the ride. At Komodo, you will be joining a team of ambitious, supportive Dragons with diverse backgrounds but a shared passion to deliver on our mission to reduce the burden of disease — and enjoy the journey along the way. The Opportunity at Komodo Health Komodo aims to build the best healthcare data architecture in the industry. We are rapidly growing and expanding our infrastructure stack and adopting SRE best practices. You will be joining the Sentinel team. We build, operate, and constantly improve our infrastructure offerings for 150+ client organizations. The team achieves our business goals by adopting best infrastructure practices and leveraging 100% automation to scale our product offerings and reduce costs. You will work with passionate team members across the country, and to accomplish our goal: reduce the burden of disease. Looking back on your first 12 months at Komodo Health, you will have… Have developed a deep understanding of Sentinel and Sentinel’s customers Have built and operated cloud infrastructure (AWS) based on customer requirements Have solved any development and deployment challenges around making Sentinel’s infrastructure highly reliable, easy to maintain, and cost effective Have participated in the development, execution, and support of the new feature rollout with solution architects and customer success teams Have developed and contributed to existing and new monitoring and alerting systems for Sentinel infrastructure Have hardened infrastructure security including network, storage, user access etc. Have responded and solved key customer reported issues in a timely manner Participated in an on-call rotation What You Bring To Komodo Health Excited about automation, building scalable technical solutions, and being a team player Proficiency in at least one of the mainstream programming languages such as Python and/or Java, with deep technical troubleshooting skills Proficiency in Terraform, Scalr, and or other similar tools Experience with AWS’s core services and their relationships; ability to create solutions based on users’ high-level descriptions, learn new cloud technologies and use them as needed, etc. Hands-on experience in building CI/CD pipelines using GitActions, Jenkins, etc. Hands-on experience in networking, subnets, CIDR, etc. 
that are applicable to deploying applications and making sure they are accessible to our users across the globe Hands-on experience with scripting (bash and/or power shell) Knowledge of operating system basics (Linux and/or Windows) Knowledge of main cloud vendors and big-data tools and frameworks like Snowflake, Airflow, and/or Spark Working knowledge of data modeling, schema design, data storage with relational (e.g. PostgreSQL), NoSQL(e.g. DynamoDB, Redis), MPP databases (e.g. Snowflake, Redshift, BigQuery) Excellent cross-team communication and collaboration skills, with the ability to initiate and effectively drive projects proactively Additional skills and experience we’d prioritize (nice to have)… Experience with AWS preferred. AWS Cloud Infrastructure certification is a plus. Backend development experience such as building APIs and micro services using Python, Java, or any other mainstream programming languages Experience with data privacy concerns such as HIPAA or GDPR Experience working with cross-functional teams and with other customer-facing teams Passion! We hope you are passionate about our mission and technology Ownership! We hope you own your work, be accountable, and push it through the finish line. We hope you treat yourself as a cofounder and do not hesitate to share any idea that helps Komodo Expertise! We do not need you to know everything, but we hope you have deep knowledge in at least one area and can start contributing quickly. And we would love to learn from you in your area(s) of expertise as well Komodo's AI Standard At Komodo, we're not just witnessing the AI revolution – we're leading it. This is a pivotal moment in time, where being first to market with AI transforms industries and sets the bar. We've already established industry leadership in leveraging AI to revolutionize healthcare, and we expect every team member to contribute. AI here isn't optional; it's foundational. We expect you to integrate AI into your daily work – from summarizing documents to automating workflows and uncovering insights. This isn't just about efficiency; it's about making every moment more meaningful, building on trust in AI, and driving our collective success. Join us in shaping the future of healthcare intelligence. Where You’ll Work Komodo Health has a hybrid work model; we recognize the power of choice and importance of flexibility for the well-being of both our company and our individual Dragons. Roles may be completely remote based anywhere in the country listed, remote but based in a specific region, or local (commuting distance) to one of our hubs in San Francisco, New York City, or Chicago with remote work options. What We Offer Positions may be eligible for company benefits in accordance with Company policy. We offer a competitive total rewards package including medical, dental and vision coverage along with a broad range of supplemental benefits including 401k Retirement Plan, prepaid legal assistance, and more. We also offer paid time off for vacation, sickness, holiday, and bereavement. We are pleased to be able to provide 100% company-paid life insurance and long-term disability insurance. This information is intended to be a general overview and may be modified by the Company due to business-related factors. Equal Opportunity Statement Komodo Health provides equal employment opportunities to all applicants and employees. 
We prohibit discrimination and harassment of any type with regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
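As a hedged illustration of the storage-hardening work described in this role, the boto3 sketch below audits which S3 buckets lack a default server-side encryption configuration; it is illustrative tooling, not Komodo Health's.

```python
# A hedged sketch of an S3 hardening audit: report buckets without a default
# server-side encryption configuration. Purely illustrative.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"no default encryption: {name}")
        else:
            raise
```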
Posted 4 days ago
6.0 - 9.0 years
0 Lacs
India
Remote
About Juniper Square Our mission is to unlock the full potential of private markets. Privately owned assets like commercial real estate, private equity, and venture capital make up half of our financial ecosystem yet remain inaccessible to most people. We are digitizing these markets, and as a result, bringing efficiency, transparency, and access to one of the most productive corners of our financial ecosystem. If you care about making the world a better place by making markets work better through technology – all while contributing as a member of a values-driven organization – we want to hear from you. Juniper Square offers employees a variety of ways to work, ranging from a fully remote experience to working full-time in one of our physical offices. We invest heavily in digital-first operations, allowing our teams to collaborate effectively across 27 U.S. states, 2 Canadian Provinces, India, Luxembourg, and England. We also have physical offices in San Francisco, New York City, Mumbai and Bangalore for employees who prefer to work in an office some or all of the time. What You’ll Do Design and architect complex systems with the team, actively participating in design reviews. Lead and mentor a team of junior developers, fostering their growth and development. Ensure high quality in team deliverables through guidance, code reviews, and setting best practices. Collaborate with cross-functional partners (Product, UX, QA) to ensure the team meets project timelines. Own monitoring, diagnosing, and resolving production issues within BizOps systems. Contribute to large-scale, complex projects, and execute development tasks through completion. Perform code reviews to uphold high quality and standards across codebases. Provide technical support for stakeholder groups, including Customer Success. Work closely with QA to maintain software quality and increase automation coverage. Qualifications Bachelor's degree in Computer Science or equivalent work experience 6 to 9 years of experience building cloud-based web applications. Previous experience leading a team is a plus. Expertise in object-oriented programming (OOP) languages such as Python, Java or similar server-side web application development languages. Experience with front-end technologies like React, CSS frameworks, HTML and JavaScript Experience with SQL database schema design and query optimization Experience with cloud technologies (AWS preferred) and container technologies (Docker and k8s) Experience with data warehousing technologies like Redshift or knowledge of time-series databases is a plus. Experience with GraphQL, Apollo Server, and NestJS is a plus but not required. You must be flexible and adaptable—you will be operating in a fast-paced startup environment. At Juniper Square, we believe building a diverse workforce and an inclusive culture makes us a better company. If you think this job sounds like a fit, we encourage you to apply even if you don’t meet all the qualifications.
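To illustrate the schema-design and query-optimization qualification above, a small PostgreSQL-style example might add an index and verify it with EXPLAIN; the table and columns are invented.

```python
# Illustrative only: a PostgreSQL-style index plus an EXPLAIN ANALYZE to confirm
# the planner uses it. Table and column names are invented for the example.
TUNING_SQL = """
CREATE INDEX IF NOT EXISTS idx_investments_fund_id ON investments (fund_id);

EXPLAIN ANALYZE
SELECT investor_id, SUM(amount)
FROM investments
WHERE fund_id = 42
GROUP BY investor_id;
"""
print(TUNING_SQL)
```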
Posted 4 days ago
7.0 years
0 Lacs
India
On-site
At 3Pillar, our focus is on leveraging cutting-edge technologies that revolutionize industries by enabling data-driven decision making. As a Senior Data Engineer, you will hold a crucial position within our dynamic team, actively contributing to thrilling projects that reshape data analytics for our clients, providing them with a competitive advantage in their respective industries. If data analytics solutions that make a real-world impact are your passion, consider this your pass to the captivating world of Data Science and Engineering! 🔮🌐 Minimum Qualifications Total IT experience should be 7+ years. Demonstrated expertise with a minimum of 5+ years of experience as a data engineer or in a similar role. Advanced SQL skills and experience with relational databases and database design. Experience working with cloud Data Warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, etc.). Strong Python skills with hands-on experience with Pandas, NumPy and other data-related libraries. Experience with Big Data technologies like Spark, MapReduce, Hadoop, Hive, etc. Proficient in data pipeline and workflow management tools, e.g., Airflow. Experience with AWS data engineering services. Knowledge of AWS services viz. S3, Lambda, EMR, Glue ETL, Athena, RDS, Redshift, EC2, IAM, AWS Kinesis. Very good exposure to working on data lake and data warehouse solutions. Excellent problem-solving, communication, and organizational skills. Proven ability to work independently and with a team. Ability to guide other data engineers.
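As a minimal, hedged sketch of the Pandas/NumPy skills this posting asks for, the snippet below cleans a small extract and aggregates it into a monthly summary; the file name and columns are assumptions.

```python
# A minimal pandas/NumPy sketch: clean a small extract and produce a monthly
# summary ready for a warehouse load. The CSV path and columns are placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("orders_extract.csv")  # placeholder extract with order_date, amount columns
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["amount"])

summary = (
    df.assign(order_month=pd.to_datetime(df["order_date"]).dt.to_period("M"))
      .groupby("order_month")["amount"]
      .agg(total="sum", avg="mean", p95=lambda s: np.percentile(s, 95))
      .reset_index()
)
print(summary.head())
```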
Posted 4 days ago
10.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred About The Company A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering and advanced analytics . The team builds high-throughput, real-time data platforms that power AI, BI and digital products for Fortune 500 clients across finance, retail and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs. Role & Responsibilities Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark and cloud object storage. Design scalable data warehouses/marts that enable self-service analytics and ML workloads. Translate logical data models into physical schemas; own database design, partitioning and lifecycle management for cost-efficient performance. Implement, automate and monitor ETL/ELT workflows, ensuring reliability, observability and robust error handling. Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times. Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers. Skills & Qualifications Must-Have 6–10 years building production-grade data platforms, including 3 years+ hands-on Apache Spark/Databricks experience. Expert proficiency in PySpark, Python and advanced SQL, with a track record of performance-tuning distributed jobs. Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt. Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse / data-lake patterns. Strong problem-solving skills, DevOps mindset and commitment to code quality; comfortable mentoring fellow engineers. Preferred Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem. Bachelor’s or Master’s degree in Computer Science, Engineering or a related field. Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions. Exposure to ML feature stores, MLOps workflows and data-governance/compliance frameworks. Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions. Benefits & Culture Highlights Remote-first & flexible hours with 25+ PTO days and comprehensive health cover. Annual training budget & certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning. Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community Skills: data modeling,big data technologies,team leadership,agile methodologies,performance tuning,data,aws,airflow
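For the Spark tuning duties described above, a hedged PySpark sketch (assuming a Databricks/Delta environment) might repartition a skewed dataset and write it partitioned by date for cheaper downstream scans; the paths and partition count are illustrative.

```python
# A hedged PySpark sketch (assumes Delta Lake is available, e.g. on Databricks):
# repartition a skewed dataset and write it partitioned by date so downstream
# queries can prune files. Paths and the partition count are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-tuning").getOrCreate()

events = spark.read.format("delta").load("s3://example-lake/bronze/events/")

(events
    .withColumn("event_date", F.to_date("event_ts"))
    .repartition(200, "event_date")        # spread skewed keys across tasks
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-lake/silver/events/"))
```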
Posted 4 days ago