
3597 Redshift Jobs - Page 49

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 years

4 - 9 Lacs

Gurgaon

On-site

Location: Gurugram, India
Employment Type: Full time
Department: Engineering

About us: SentiLink provides innovative identity and risk solutions, empowering institutions and individuals to transact confidently with one another. By building the future of identity verification in the United States and reinventing the currently clunky, ineffective, and expensive process, we believe strongly that the future will be 10x better. We've had tremendous traction and are growing extremely quickly. Already our real-time APIs have helped verify hundreds of millions of identities, beginning with financial services. In 2021, we raised a $70M Series B round, led by Craft Ventures, to rapidly scale our best-in-class products. We've earned coverage and awards from TechCrunch, CNBC, Bloomberg, Forbes, Business Insider, PYMNTS, American Banker, and LendIt, and have been named to the Forbes Fintech 50 list consecutively since 2023. Last but not least, we've even been a part of history: we were the first company to go live with the eCBSV and testified before the United States House of Representatives.

About the Role: Are you passionate about creating world-class solutions that fuel product stability and continuously improve infrastructure operations? We're looking for a driven Infrastructure Engineer to architect, implement, and maintain powerful observability systems that safeguard the performance and reliability of our most critical systems. In this role, you'll take real ownership, collaborating with cross-functional teams to shape best-in-class observability standards, troubleshoot complex issues, and fine-tune monitoring tools to exceed SLA requirements. If you're ready to design high-quality solutions, influence our technology roadmap, and make a lasting impact on our product's success, we want to meet you!

Responsibilities:
Improve alerting across SentiLink systems and services, developing high-quality monitoring capabilities while actively reducing false positives.
Troubleshoot, debug, and resolve infrastructure issues as they arise; participate in on-call rotations for production issues.
Define and refine Service Level Indicators (SLIs), Service Level Objectives (SLOs), and Service Level Agreements (SLAs) in collaboration with product and engineering teams.
Develop monitoring and alerting configurations using IaC solutions such as Terraform.
Build and maintain dashboards to provide visibility into system performance and reliability.
Collaborate with engineering teams to improve root cause analysis processes and reduce Mean Time to Recovery (MTTR).
Drive cost optimization for observability tools such as Datadog, CloudWatch, and Sumo Logic.
Perform capacity testing to develop a deep understanding of infrastructure performance under load, and develop alerting based on those learnings.
Oversee, develop, and operate Kubernetes and service mesh infrastructure, ensuring smooth performance and reliability.
Investigate operational alerts, identify root causes, and compile comprehensive root cause analysis reports; pursue action items relentlessly until they are thoroughly completed.
Conduct in-depth examinations of database operational issues, actively developing and improving database architecture, schema, and configuration for enhanced performance and reliability.
Develop and maintain incident response runbooks and improve processes to minimize service downtime.
Research and evaluate new observability tools and technologies to enhance system monitoring.
Requirements:
5 years of experience in cloud infrastructure, DevOps, or systems engineering.
Expertise in AWS and infrastructure-as-code development.
Experience with CI/CD pipelines and automation tools.
Experience managing observability platforms, building monitoring dashboards, and configuring high-quality, actionable alerting.
Strong understanding of Linux systems and networking.
Familiarity with container orchestration tools like Kubernetes or Docker.
Excellent analytical and problem-solving skills.
Experience operating enterprise-size databases; Postgres, Aurora, Redshift, and OpenSearch experience is a plus.
Experience with Python or Golang is a plus.

Perks:
Employer-paid group health insurance for you and your dependents.
401(k) plan with employer match (or equivalent for non-US-based roles).
Flexible paid time off.
Regular company-wide in-person events.
Home office stipend, and more!

Corporate Values: Follow Through; Deep Understanding; Whatever It Takes; Do Something Smart.
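For illustration only: the listing describes building alerting configurations with IaC tools such as Terraform. A minimal sketch of the same idea in Python with boto3 (not the Terraform configuration the team actually uses) is below; the namespace, metric name, and SNS topic ARN are hypothetical placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="api-p99-latency-high",
    Namespace="MyApp",                      # assumed custom metric namespace
    MetricName="p99_latency_ms",            # assumed custom metric
    Statistic="Average",
    Period=300,                             # evaluate in 5-minute windows
    EvaluationPeriods=3,                    # require 3 consecutive breaches to cut false positives
    Threshold=500.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-topic"],  # placeholder ARN
)

Requiring several consecutive breaching periods before alarming is one common way to trade a little alert latency for fewer false positives, which the role calls out explicitly.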

Posted 4 weeks ago

Apply

2.0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

Remote

Job Title: BI (Business Intelligence) Engineer (Contract to Hire)
Location: Hybrid (Remote + Onsite in Kolkata Office)
Duration: 3-Month Full-Time Contract (Potential to convert to Full-Time Employee)
Start Date: July 2025 onwards

About the Role:
We are looking for a talented and detail-oriented BI (Business Intelligence) Engineer to join our team on a 3-month full-time contract basis, with the potential to convert into a permanent role based on performance and business needs.
The ideal candidate will have hands-on experience in designing, building, and maintaining interactive dashboards and reports using tools like Tableau, Power BI, and Looker Studio, and possess strong skills in working with SQL and relational databases.
You will work closely with cross-functional teams to turn raw data into meaningful insights that support business decision-making.

Key Responsibilities:
Design and develop interactive dashboards and reports using Tableau, Power BI, Looker Studio, or similar BI tools.
Collaborate with business and technical stakeholders to gather requirements and define KPIs.
Write optimized SQL queries to extract and manipulate data from various databases.
Ensure data accuracy, consistency, and integrity across reports and dashboards.
Analyze data trends and provide actionable insights to support business operations.
Document data models, report logic, and dashboard designs for maintainability.

Required Skills & Qualifications:
a. Education & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, Business Analytics, or a related field.
2+ years of experience in BI/reporting roles or relevant project-based experience.
b. Technical Experience:
Hands-on experience with one or more of the following BI tools: Tableau, Power BI, Looker Studio (Google Data Studio).
Strong proficiency in SQL and experience working with relational databases (e.g., MySQL, PostgreSQL, BigQuery).
Understanding of data modeling, joins, data cleansing, and transformation techniques.
Familiarity with cloud-based data warehouses (e.g., BigQuery, Snowflake, AWS Redshift) is a plus.
Knowledge of Excel, Google Sheets, and scripting (Python or R) for data manipulation is a bonus.
c. Soft Skills:
Strong analytical thinking and problem-solving mindset.
Good communication skills to present data clearly to both technical and non-technical audiences.
Self-motivated, detail-oriented, and able to work independently in a hybrid environment.

Nice-to-Have Skills:
Experience in data storytelling or dashboard UX/UI design best practices.
Exposure to data governance and access control in BI tools.
Basic understanding of Japanese is a plus.

Work Arrangement:
Hybrid: Primarily remote, with occasional onsite meetings at our Kolkata office. Must be available to work from the Kolkata office when required.

Contract & Future Opportunity:
Initial Engagement: 3-month contract with full-time (100%) commitment.
Future Opportunity: High potential for conversion to a full-time permanent role, depending on performance and business needs.

Posted 4 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

Job Summary: We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Design and implement ETL workflows using AWS Glue, Python, and PySpark.
Develop and optimize queries using Amazon Athena and Redshift.
Build scalable data pipelines to ingest, transform, and load data from various sources.
Ensure data quality, integrity, and security across AWS services.
Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
Monitor and troubleshoot ETL jobs and cloud infrastructure performance.
Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications:
Hands-on experience with AWS Glue, Athena, and Redshift.
Strong programming skills in Python and PySpark.
Experience with ETL design, implementation, and optimization.
Familiarity with S3, Lambda, CloudWatch, and other AWS services.
Understanding of data warehousing concepts and performance tuning in Redshift.
Experience with schema design, partitioning, and query optimization in Athena.
Proficiency in version control (Git) and agile development practices.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
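As a rough illustration of the Glue/PySpark ETL workflows this listing describes, here is a minimal job skeleton. It is a sketch only: it assumes the AWS Glue job runtime (which provides the awsglue modules), and the bucket paths and column names are placeholders, not anything specified by the posting.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV landed in S3 (bucket/prefix are placeholders).
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Basic cleansing and typing before loading downstream.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)
)

# Write partitioned Parquet so Athena or Redshift Spectrum can query it efficiently.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()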

Posted 4 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

BI: Tableau Desktop, Tableau report and dashboard design, data visualization and analysis, Tableau Server, Tableau Reader; Cognos Report Studio, Query Studio, and Cognos Connection are a plus.
Languages: SQL, PL/SQL, T-SQL, SQL*Plus; SAS Base is a plus.
Ability to perform complex MS Excel operations, including pivot tables and filter operations on the underlying data.
Knowledge of reporting tools like Qlik Sense and QlikView, and of statistical tools like Advanced Excel (VLOOKUP, charts, dashboard design), Visual Basic using Visual Studio, and MS Access is a plus.
Critical thinking and analysis skills, along with good interpersonal and communication skills.
Ability to adapt to and learn new technologies and become proficient with them quickly.
Data mining experience.
Experience blending data from multiple sources such as flat files, Excel, Oracle, and the Tableau Server environment.
Experience using cloud sources such as Amazon AWS Redshift, Snowflake, Google Drive, MS Excel, and Oracle.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 4 weeks ago

Apply

9.0 years

0 Lacs

Andhra Pradesh

On-site

Data Engineer
Must have 9+ years of experience in the skills listed below.
Must Have: Big Data concepts; Python (core Python, able to write code), SQL, Shell Scripting, AWS S3.
Good to Have: Event-driven architecture / AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing platforms.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on AWS.
Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical and Professional Experience: Certification in AWS, and Databricks or Cloudera certified Spark developers.
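A hedged sketch of the kind of streaming pipeline mentioned above (Spark reading from Kafka and landing data in a lake): it assumes a Spark cluster with the spark-sql-kafka connector package available, and the broker address, topic, schema, and S3 paths are illustrative only.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest-demo").getOrCreate()

# Assumed message schema for the example topic.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", StringType()),
])

# Read a Kafka topic as a stream (broker and topic names are placeholders).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")
         .option("subscribe", "sensor-readings")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("payload"))
         .select("payload.*")
)

# Write micro-batches to a data-lake path; checkpointing makes the stream restartable.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://example-lake/sensor-readings/")
          .option("checkpointLocation", "s3a://example-lake/_checkpoints/sensor-readings/")
          .start()
)
query.awaitTermination()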

Posted 4 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Job Summary: We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Design and implement ETL workflows using AWS Glue, Python, and PySpark.
Develop and optimize queries using Amazon Athena and Redshift.
Build scalable data pipelines to ingest, transform, and load data from various sources.
Ensure data quality, integrity, and security across AWS services.
Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
Monitor and troubleshoot ETL jobs and cloud infrastructure performance.
Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications:
Hands-on experience with AWS Glue, Athena, and Redshift.
Strong programming skills in Python and PySpark.
Experience with ETL design, implementation, and optimization.
Familiarity with S3, Lambda, CloudWatch, and other AWS services.
Understanding of data warehousing concepts and performance tuning in Redshift.
Experience with schema design, partitioning, and query optimization in Athena.
Proficiency in version control (Git) and agile development practices.
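The requirement about partitioning and query optimization in Athena can be illustrated with a small boto3 sketch: filtering on a partition column is what keeps the scanned bytes (and therefore cost) down. The database, table, and result-bucket names are assumptions made for the example.

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Partition pruning: filtering on the partition column limits the data scanned.
sql = """
SELECT order_date, SUM(amount) AS revenue
FROM analytics.orders
WHERE order_date = DATE '2024-01-01'
GROUP BY order_date
"""

resp = athena.start_query_execution(
    QueryString=sql,
    QueryExecutionContext={"Database": "analytics"},               # assumed Glue catalog database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
)
print("Query execution id:", resp["QueryExecutionId"])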

Posted 4 weeks ago

Apply

3.0 - 5.0 years

13 - 17 Lacs

Hyderabad, Chennai

Work from Office

Role & Responsibilities

Job Description: A detail-oriented and technically proficient Business Intelligence (BI) Engineer with strong Tableau expertise to support data analytics, dashboard development, and reporting initiatives. The ideal candidate has a solid background in SQL, data modeling, and visualization, with experience transforming raw data into actionable insights for business stakeholders.

Key Responsibilities:
Design, build, and maintain Tableau dashboards and visualizations that communicate key business metrics.
Collaborate with business analysts, data engineers, and stakeholders to gather requirements and transform them into technical solutions.
Write and optimize SQL queries to extract, transform, and load data from various sources.
Support data quality, validation, and integrity across reports and dashboards.
Develop and maintain data models and ETL pipelines for BI use cases.
Perform ad hoc analyses and provide insights to business teams across departments (e.g., Marketing, Finance, Sales).
Assist in user training and documentation of BI solutions.
Participate in code reviews, version control, and agile sprint ceremonies (if applicable).

Required Qualifications:
3-5 years of experience in BI engineering or data analytics roles.
Proficiency in Tableau (Desktop and Server): creating interactive dashboards, storyboards, and advanced charts.
Strong knowledge of SQL (PostgreSQL, MySQL, SQL Server, etc.).
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
Familiarity with ETL tools (e.g., Talend, Informatica, Apache Airflow, dbt) is a plus.
Understanding of data governance and security best practices.
Ability to translate business needs into scalable BI solutions.

Nice to Have:
Exposure to cloud platforms like AWS, Azure, or GCP.
Knowledge of Agile/Scrum methodology.
Experience in performance tuning of dashboards and SQL queries.

Preferred candidate profile

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Data Engineering
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), Snowflake Data Warehouse
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities:
Hands-on development experience in Data Warehousing and/or Software Development.
Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
Perform data integration and sourcing activities across various platforms.
Develop data assets to support optimized analysis for customer and regulatory outcomes.
Provide ongoing support for data platforms, including problem and incident management.
Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
Must-have skills: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
Advanced skills in SQL and Python.
Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
Position: Senior Analyst, Data Engineering
Reports to: Manager, Data Engineering
Division: Personal Bank
Group: 3
Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred
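The AWS stack this listing names (S3, Redshift, Airflow) is commonly stitched together with a DAG like the following sketch. It assumes the Airflow Amazon provider package is installed; the connection ids, bucket, schema, and table names are placeholders, and parameter names can differ slightly across Airflow versions.

from datetime import datetime
from airflow import DAG
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

# Connection ids ("aws_default", "redshift_default") and object names are illustrative only.
with DAG(
    dag_id="s3_to_redshift_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_orders = S3ToRedshiftOperator(
        task_id="copy_orders",
        s3_bucket="example-curated-bucket",
        s3_key="orders/{{ ds }}/",           # templated daily partition
        schema="staging",
        table="orders",
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )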

Posted 4 weeks ago

Apply

26.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Title: Senior Software Engineer Location: Mumbai, India (roles in Andheri East and / or Turbhe) Description: We’re hiring Senior Software Engineers to lead the design and delivery of robust, scalable, and production-grade data systems across a high-growth, multi-venture environment. This is a high-impact leadership role for someone who thrives in hands-on build mode, and who can scale technical delivery without losing speed, quality, or ownership. If this is interesting, we would love to hear from you! About Blenheim Chalcot Blenheim Chalcot India is part of Blenheim Chalcot a global venture builder headquartered in London. With over 26 years of innovation, we've been at the forefront of creating some of the most groundbreaking GenAI-enabled companies. Our ventures lead the charge in digital disruption across a spectrum of industries, from FinTech to EdTech, GovTech to Media, and beyond. Our global presence spans the US, Europe, and Southeast Asia, with a portfolio that employs over 3,000 individuals, manages assets exceeding £1.8 billion, and boasts total portfolio sales of over £500 million. The role We’re hiring Senior Software Engineers to build scalable products, tools, and systems across our portfolio companies. This is a high-impact role for someone who thrives in hands-on engineering, writes maintainable code, and is deeply motivated by solving real-world problems with technology. You’ll work closely with product and engineering colleagues in both Mumbai and London, driving delivery within a specific business. If you're passionate about engineering craft, product-led development, and delivering systems that scale, we’d love to hear from you. Behaviours that we look for Solve problems rigorously, not reactively - breaking them down into tractable elements and considering multiple paths to resolution. Write clean, defensive code with clear structure and minimal complexity. Think about testing and observability early, automating meaningful tests and implementing relevant metrics/logging. Design systems for operations, not just development - considering CI/CD, resilience, cost, and scalability from day one. Use data to guide decisions, validating design impact, performance, and real user outcomes. Key responsibilities Working within our Engineering Centre of Excellence in Mumbai, you can expect to: Build and maintain scalable services, data pipelines, and backend systems. Write readable, efficient code and collaborate closely in code reviews. Automate robust test coverage (unit, integration, and/or contract tests). Implement data models used by analysts and ML teams; ensure schema evolution and data integrity are maintained. Participate in system design sessions and contribute to architectural discussions. Debug production issues across multiple services and layers of the stack. Contribute to CI/CD pipelines and infrastructure-aware development practices. Opportunity This is an excellent opportunity for experienced engineers looking to step up their scope and impact. You’ll join a collaborative team with exposure to modern engineering practices, GenAI-enabled systems, and business-critical data products. You'll be part of a global network, shaping real customer outcomes. About You We are seeking to onboard candidates with a proven track record in engineering, demonstrating strong technical leadership skills and a passion for building high-quality, scalable products. Excellent teamwork, adaptability, and a strategic mindset are essential to being successful in this role. 
The Ideal Candidate: We are looking for candidates who bring:
Strong proficiency in at least one modern programming language (e.g., Python, Java).
Experience with both SQL and NoSQL databases; schema design and query optimisation.
Comfort with cloud platforms such as AWS or Azure (e.g., S3, Redshift, Glue, Synapse).
Familiarity with CI/CD practices, automated testing, and observability.
Knowledge of data structures, algorithms, and their real-world trade-offs.
A proactive mindset: ready to take ownership, iterate fast, and learn continuously.
Great communication and teamwork skills, particularly in cross-functional settings.

Process: We have a rigorous but streamlined recruitment process, which respects the time of candidates and portfolio companies alike. This process starts with a 15-minute call with a member of our Talent Acquisition team, followed by a meeting with representatives from BC's Engineering Centre of Excellence. Please note that our roles are primarily office based, with modern and well-connected office locations in both Andheri East and Navi Mumbai.

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Data Engineering
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), Snowflake Data Warehouse
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities:
Hands-on development experience in Data Warehousing and/or Software Development.
Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
Perform data integration and sourcing activities across various platforms.
Develop data assets to support optimized analysis for customer and regulatory outcomes.
Provide ongoing support for data platforms, including problem and incident management.
Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
Must-have skills: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
Advanced skills in SQL and Python.
Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
Position: Senior Analyst, Data Engineering
Reports to: Manager, Data Engineering
Division: Personal Bank
Group: 3
Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us: Zelis is modernizing the healthcare financial experience in the United States (U.S.) by providing a connected platform that bridges the gaps and aligns interests across payers, providers, and healthcare consumers. This platform serves more than 750 payers, including the top 5 health plans, BCBS insurers, regional health plans, TPAs and self-insured employers, and millions of healthcare providers and consumers in the U.S. Zelis sees across the system to identify, optimize, and solve problems holistically with technology built by healthcare experts, driving real, measurable results for clients.

Why We Do What We Do: In the U.S., consumers, payers, and providers face significant challenges throughout the healthcare financial journey. Zelis helps streamline the process by offering solutions that improve transparency, efficiency, and communication among all parties involved. By addressing the obstacles that patients face in accessing care, navigating the intricacies of insurance claims, and the logistical challenges healthcare providers encounter with processing payments, Zelis aims to create a more seamless and effective healthcare financial system. Zelis India plays a crucial role in this mission by supporting various initiatives that enhance the healthcare financial experience. The local team contributes to the development and implementation of innovative solutions, ensuring that technology and processes are optimized for efficiency and effectiveness. Beyond operational expertise, Zelis India cultivates a collaborative work culture, leadership development, and global exposure, creating a dynamic environment for professional growth. With hybrid work flexibility, comprehensive healthcare benefits, financial wellness programs, and cultural celebrations, we foster a holistic workplace experience. Additionally, the team plays a vital role in maintaining high standards of service delivery and contributes to Zelis' award-winning culture.

Position Overview
Job Title: Data Engineer, Zelis Data Cloud
Location: Hyderabad, India
Department: Data Platform (ZDC, ZDI)
Reports To: Manager, ZDC

Key Responsibilities:
At least 5 years of experience in designing and developing data pipelines and assets.
At least 3 years of experience with at least one columnar MPP cloud data warehouse (Snowflake, Azure Synapse, or Redshift).
At least 2 years of experience with ETL tools such as Azure Data Factory, Fivetran, or dbt.
Experience with Git and Azure DevOps.
Experience with Agile, Jira, and Confluence.
Solid understanding of programming SQL objects (procedures, triggers, views, functions) in SQL Server; experience optimizing SQL queries is a plus.
Working knowledge of Azure architecture and Data Lake.
Willingness to contribute to documentation (e.g., mapping, defect logs).

Qualifications:
Bachelor's degree in Computer Science, Statistics, or a related field.
Self-starter and learner.
Able to understand and probe for requirements.
Able to generate functional specs for code migration, or ask the right questions to do so.
Hands-on programmer with a thorough understanding of performance tuning techniques.
Experience handling large data volume transformations (on the order of 100 GB monthly).
Able to create solutions and data flows to suit requirements.
Able to produce timely documentation, e.g., mapping, UTR, defect / KEDB logs.

Tech experience expected:
Primary: Snowflake, dbt (development and testing)
Secondary: Python, ETL or any data processing tool
Nice to have: Domain experience in Healthcare
Experience range: 4-6 years

Posted 4 weeks ago

Apply

5.0 years

15 - 25 Lacs

Mumbai Metropolitan Region

On-site

Data Engineer – On-Site, India

Industry: Enterprise Data Analytics & Digital Transformation Consulting. We architect and operationalize large-scale data platforms that power BI, AI, and advanced reporting for global clients across finance, retail, and manufacturing. Leveraging modern cloud services and proven ETL frameworks, our teams turn raw data into trusted, analytics-ready assets that accelerate business decisions.

Role & Responsibilities:
Design, build, and optimize end-to-end ETL pipelines that ingest, cleanse, and transform high-volume datasets using SQL and ELT best practices.
Create scalable data models and dimensional schemas to support reporting, dashboarding, and machine-learning use cases.
Develop and maintain batch and near-real-time workflows in Airflow or similar orchestration tools, ensuring fault tolerance and SLA compliance.
Collaborate with analysts, data scientists, and product owners to translate business requirements into performant data solutions.
Implement rigorous data quality checks, lineage tracking, and metadata management to guarantee trust and auditability.
Tune queries, indexes, and storage partitions for cost-efficient execution across on-premise and cloud data warehouses.

Skills & Qualifications

Must-Have:
5+ years of hands-on experience as a Data Engineer or similar.
Advanced SQL proficiency for complex joins, window functions, and performance tuning.
Proven expertise in building ETL/ELT pipelines with tools such as Informatica, Talend, or custom Python.
Solid understanding of dimensional modeling, star/snowflake schemas, and data-vault concepts.
Experience with workflow orchestration (Airflow, Luigi, or equivalent) and version control (Git).
Strong grasp of data quality frameworks and error-handling strategies.

Preferred:
Exposure to cloud platforms (AWS Redshift, Azure Synapse, or Google BigQuery).
Knowledge of containerization and CI/CD pipelines for data workloads.
Familiarity with streaming technologies (Kafka, Kinesis) and real-time ETL patterns.
Working knowledge of BI tools (Tableau, Power BI) and their data connectivity.

Benefits & Culture Highlights:
Work with high-calibre data practitioners and cutting-edge cloud tech.
Merit-driven growth path, certification sponsorships, and continuous learning stipends.
Inclusive, innovation-first culture that rewards problem-solving and ownership.

Skills: Kafka, data warehouse, containerization, Airflow, ELT, Luigi, error-handling strategies, Git, AWS Redshift, Talend, star schema, Power BI, Informatica, data vault, CI/CD, Azure Synapse, ETL, SQL, Kinesis, performance tuning, data modeling, data quality frameworks, Python, dimensional modeling, snowflake schema, Tableau, Google BigQuery
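As a small, self-contained illustration of the "complex joins and window functions" requirement, the following uses Python's built-in sqlite3 (window functions need SQLite 3.25+, bundled with modern Python builds); the equivalent SQL in Redshift, Synapse, or BigQuery would be essentially the same. The table and column names are invented for the example.

import sqlite3

# In-memory toy data standing in for a warehouse fact table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (region TEXT, order_day TEXT, amount REAL);
INSERT INTO sales VALUES
 ('north','2024-01-01',100),('north','2024-01-02',150),
 ('south','2024-01-01', 80),('south','2024-01-02',120);
""")

# Window function: running total of revenue per region, ordered by day.
query = """
SELECT region, order_day, amount,
       SUM(amount) OVER (PARTITION BY region ORDER BY order_day) AS running_revenue
FROM sales
ORDER BY region, order_day;
"""
for row in con.execute(query):
    print(row)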

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

DXFactor is a US-based tech company working with customers across the globe. We are a Great Place to Work certified company. We are looking for candidates for the role of Data Engineer (4 to 6 years of experience).

We have our presence in: US, India (Ahmedabad, Bangalore)
Location: Ahmedabad
Website: www.DXFactor.com
Designation: Data Engineer (Expertise in Snowflake, AWS & Python)

Key Responsibilities:
Design, develop, and maintain scalable data pipelines for batch and streaming workflows.
Implement robust ETL/ELT processes to extract data from various sources and load it into data warehouses.
Build and optimize database schemas following best practices in normalization and indexing.
Create and maintain documentation for data flows, pipelines, and processes.
Collaborate with cross-functional teams to translate business requirements into technical solutions.
Monitor and troubleshoot data pipelines to ensure optimal performance.
Implement data quality checks and validation processes.
Build and maintain CI/CD workflows for data engineering projects.
Stay current with emerging technologies and recommend improvements to existing systems.

Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field.
Minimum 4+ years of experience in data engineering roles.
Strong proficiency in Python programming and SQL query writing.
Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery).
Proven track record in building efficient and scalable data pipelines.
Practical knowledge of batch and streaming data processing approaches.
Experience implementing data validation, quality checks, and error handling mechanisms.
Working experience with cloud platforms, particularly AWS (S3, EMR, Glue, Lambda, Redshift) and/or Azure (Data Factory, Databricks, HDInsight).
Understanding of different data architectures including data lakes, data warehouses, and data mesh.
Demonstrated ability to debug complex data flows and optimize underperforming pipelines.
Strong documentation skills and ability to communicate technical concepts effectively.
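One of the responsibilities above is implementing data quality checks and validation processes. A minimal pandas sketch of that idea, using a toy batch and hypothetical column names rather than anything from the posting, could look like this.

import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list:
    """Return a list of human-readable failures; an empty list means the batch passed."""
    failures = []
    if df["order_id"].isna().any():
        failures.append("order_id contains nulls")
    if df["order_id"].duplicated().any():
        failures.append("order_id is not unique")
    if (df["amount"] <= 0).any():
        failures.append("amount must be positive")
    return failures

# Toy batch standing in for data pulled from a pipeline stage.
batch = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 7.5]})
problems = run_quality_checks(batch)
if problems:
    # In a real pipeline this would fail the task and alert, not just raise.
    raise ValueError("Data quality check failed: " + "; ".join(problems))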

Posted 4 weeks ago

Apply

18.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About the Role This is a senior leadership position within Business Information Management Practice. The individual is responsible for the overall vision, strategy, delivery and operations of key accounts in BIM. This requires working closely with global executive team, subject matter experts, solution architects, project managers and client teams to conceptualize, build and operate Big Data Solutions. Communicate with internal management, client sponsors and senior leaders on the project status, risks, solution, etc. Responsibilities Client Delivery Leadership Role Candidate to be responsible for delivering at least $10 M + revenue using information management solution(s): Big Data, Data Warehouse, Data Lake, GEN AI, Master Data Management System, Business Intelligence & Reporting solutions, IT Architecture Consulting, Cloud Platforms (AWS/AZURE), SaaS/PaaS based solutions Practice and Team Leadership Role: Self-Driven for results - Able to take initiative and set priorities; pursue tasks tenaciously & with a need to finish. Able to overcome setbacks which may occur along the way. Customer Focus - Dedicated to meeting the expectations of internal and external clients. Problem Solving - Uses rigorous logic and methods to solve difficult problems with effective solutions. Probes all fruitful sources for answers. Is excellent at honest analysis. Looks beyond the obvious and doesn’t stop at the first answers. Learning on the Fly - Learns quickly when facing new problems. A relentless and versatile learner. Proven ability to handle multiple projects/programs while meeting deadlines and documenting progress towards those deadlines. Excellent communication skills (must be able to interface with both technical and business leaders in the organization). Leadership Skills to Coach, mentor and develop senior and middle level staff. Develop the manager layers to be leaders of the future. Be known as a Thought Leader in a specific aspect of Information Management technology spectrum or Pharma domain. Direct the training & skill enhancement of the team, in line with pipeline opportunities. Ability to lead large RFP responses, design and implement the solution for proposals and customer decks. Assist in generating order pipeline, road shows, develop Go-to-market strategy for regions & verticals. Create market facing collaterals as per requirements. Able to write white paper, blogs, technical/functional point of view. Qualifications MBA in Business Management Bachelor of Computer Science Required Skills Candidate should have 18+ years of prior experience (preferably including at least 5 yrs in Pharma Commercial domain) in delivering customer focused information management solution(s): Big Data, Data Warehouse, Data Lake, Master Data Management System, Business Intelligence & Reporting solutions, IT Architecture Consulting, Cloud Platforms (AWS/AZURE), SaaS/PaaS based solutions. Should have successfully done 4-5 end to end DW implementations using technologies: Big Data, Data Management and BI technologies such as Redshift, Hadoop, ETL tools like Informatica/Matillion/Talend, BI tools like Qlik/MSTR/Tableau, Dataiku/Knime and Cloud Offerings from AWS/Azure. Ability to lead large RFP responses, design and implement the solution for proposals and customer decks. Should have led large teams of at least 100+ resources. Good communication, client facing and leadership skills. Hands on knowledge of databases, SQL, reporting solutions like BI tools or Excel/VBA. 
Preferred Skills Teamwork & Leadership Motivation to Learn and Grow Ownership Cultural Fit Talent Management Capability Building / Thought Leadership About the Company Axtria is a global provider of cloud software and data analytics to the Life Sciences industry. We help Life Sciences companies transform the product commercialization journey to drive sales growth and improve healthcare outcomes for patients. We are acutely aware that our work impacts millions of patients and lead passionately to improve their lives. Since our founding in 2010, technology innovation has been our winning differentiation, and we continue to leapfrog competition with platforms that deploy Artificial Intelligence and Machine Learning. Our cloud-based platforms - Axtria DataMax™, Axtria InsightsMax™, Axtria SalesIQ™, and Axtria MarketingIQ™ - enable customers to efficiently manage data, leverage data science to deliver insights for sales and marketing planning and manage end-to-end commercial operations. With customers in over 75 countries, Axtria is one of the largest global commercial solutions providers in the Life Sciences industry. We continue to win industry recognition for growth and are featured in some of the most aspirational lists - INC 5000, Deloitte FAST 500, NJBiz FAST 50, SmartCEO Future 50, Red Herring 100, and several other growth and technology awards. Axtria is looking for exceptional talent to join our rapidly growing global team. People are our biggest perk! Our transparent and collaborative culture offers a chance to work with some of the brightest minds in the industry. Axtria Institute, our in-house university, offers the best training in the industry and an opportunity to learn in a structured environment. A customized career progression plan ensures every associate is setup for success and able to do meaningful work in a fun environment. We want our legacy to be the leaders we produce for the industry. Will you be next?

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview: Blue Spire is hiring across multiple levels, from Database Engineers to Engineering Managers, to support our client's mission-critical analytics platforms. The team will be part of a centralized group of database engineers responsible for the maintenance and support of our client's most critical databases. This is a high-impact opportunity within a new team that drives technical excellence and partners closely with global business and technology teams.

Responsibilities:
Requires conceptual knowledge of database practices and procedures such as DDL, DML, and DCL.
Knowledge of or experience with database management/administration: Redshift, Snowflake, or Neo4j.
Requires basic SQL skills, including SELECT, FROM, WHERE, and ORDER BY.
Ability to code SQL joins, subqueries, and aggregate functions (AVG, SUM, COUNT), and to use data manipulation techniques (UPDATE, DELETE).
Understanding of basic data relationships and schemas; able to develop basic entity-relationship diagrams.
Conceptual understanding of cloud computing.
Can solve routine problems using existing procedures and standard practices; can look up error codes and open tickets with vendors.
Ability to execute explain plans and identify poorly written queries.
Review data structures to ensure they adhere to database design best practices.
Understanding of the different cloud models (IaaS, PaaS, SaaS), service models, and deployment options (public, private, hybrid).
Troubleshoot database issues, such as integrity issues, blocking/deadlocking issues, log shipping issues, connectivity issues, security issues, memory issues, disk space, etc.
Understanding of cloud security concepts, including data protection, access control, and compliance.
Manage risks associated with the use of information technology; identify, assess, and treat risks that might affect the confidentiality, integrity, and availability of the organization's assets.
Ability to design and implement highly performing databases, using partitioning and indexing, that meet or exceed the business requirements.
Document a complex software system design as an easily understood diagram, using text and symbols to represent the way data needs to flow.
Ability to code complex SQL.
Perform effective backup management and periodic database restoration testing.
General DB cloud networking skills: VPCs, SGs, KMS keys, private links.
Ability to develop stored procedures and use at least one scripting language for reusable code and improved performance.
Know how to import and export data into and out of databases using ETL tools, code, migration tools like DMS, or scripts.
Knowledge of DevOps principles and tools, such as CI/CD.
Attention to detail and a customer-centric approach.
Solve complex problems by taking a new perspective on existing solutions; exercise judgment based on the analysis of multiple sources of information.
Ability to optimize queries for performance and resource efficiency.
Review database metrics to identify performance issues.

Required Qualifications:
Experience with database management/administration: Redshift, Snowflake, or Neo4j.
Experience working with incident, change, and problem management processes and procedures.
Experience maintaining and supporting large-scale critical database systems in the cloud.
Experience working with AWS cloud-hosted databases.
An understanding of at least one programming language (such as Python 3, Java, JavaScript, Ruby, Golang, C, or C++), including at least one front-end framework (Angular/React/Vue).
Experience with cloud computing, ETL, and streaming technologies (OpenShift, DataStage, Kafka).
Experience with agile development methodology.
Strong SQL performance and tuning skills.
Excellent communication and client interfacing skills.
Experience working in the banking industry.
Experience working in an agile development environment.
Experience working in cloud environments such as AWS, Azure, or Google Cloud.
Experience with CI/CD pipelines (Jenkins, Liquibase, or equivalent).
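The listing asks for the ability to run explain plans and spot poorly written queries. A tiny self-contained illustration using Python's sqlite3 shows how adding an index changes the access path reported by EXPLAIN QUERY PLAN; the table and index names are made up, and production engines such as Redshift or PostgreSQL have their own EXPLAIN syntax and output.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
""")

query = """
SELECT c.region, COUNT(*) AS order_count, AVG(o.amount) AS avg_amount
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.region = 'north'
GROUP BY c.region;
"""

# Without an index, the planner typically reports a full scan of orders.
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print(row)

# Adding an index on the join/filter column usually changes the access path.
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id);")
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print(row)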

Posted 4 weeks ago

Apply

5.0 years

17 - 25 Lacs

Chennai, Tamil Nadu, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 1700000 - Rs 2500000 (i.e., INR 17-25 LPA)
Min Experience: 5 years
Location: Chennai (anywhere in Tamil Nadu)
Job Type: Full-time
Notice Period: Immediate joiners or within 30 days preferred
Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field

Requirements

Key Responsibilities & Skills

Core Competencies:
Automation: Expertise in automating infrastructure using tools like CDK, CloudFormation, and Terraform.
CI/CD Pipelines: Hands-on experience with GitHub Actions, Jenkins, or similar tools for continuous integration and deployment.
Monitoring & Observability: Familiar with tools like OpenTelemetry, Prometheus, and Grafana.
API & Load Balancing: Understanding of REST, gRPC, Protocol Buffers, API Gateway, and load balancing techniques.

Technical Requirements:
Strong foundation in Linux OS and system concepts.
Experience handling production issues and ensuring system reliability.
Proficient in at least one programming or scripting language (Python, Go, or Shell).
Familiarity with Docker, microservices architecture, and cloud-native tools like Kubernetes.
Understanding of RDBMS/NoSQL databases such as PostgreSQL and MongoDB.

Additional Skills:
Awareness of security practices including OWASP, static code analysis, etc.
Familiarity with fintech security standards (e.g., PCI-DSS, SOC 2) is a plus.
AWS certifications are an added advantage.
Knowledge of AWS data services like DMS, Glue, Athena, and Redshift is a bonus.
Experience working in start-up environments and with distributed teams is desirable.

Key Skills: DevOps | AWS | Terraform | CI/CD | Linux | Docker | Kubernetes | Python | Infrastructure Automation | Monitoring & Observability
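Since the role lists CDK among its infrastructure-automation tools, here is a minimal, hedged sketch of a CDK stack written in Python. It assumes aws-cdk-lib and constructs are installed and that deployment happens separately via the cdk CLI; the stack and bucket names are placeholders, not part of the posting.

from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataBucketStack(Stack):
    """Minimal infrastructure-as-code sketch: a versioned, encrypted S3 bucket."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self, "RawDataBucket",                       # logical id, illustrative only
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,         # keep data if the stack is deleted
        )

app = App()
DataBucketStack(app, "DataBucketStack")
app.synth()  # `cdk deploy` would synthesize and apply this as CloudFormation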

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Greater Hyderabad Area

Remote

COMPANY OVERVIEW Autism affects 1 in 36 children globally. By 2025, over a million American children will still lack essential care due to a dire shortage of Applied Behavior Analysis (ABA) clinicians. Neuromnia is a Texas-based healthcare startup leveraging human-centered AI to 10x clinician productivity , making quality autism care scalable and accessible. Your work will directly impact millions of individuals with autism and their families. You'll be at the forefront of healthcare data innovation, building the data infrastructure that powers our AI-driven solutions and creates real social impact. ROLE OVERVIEW We are seeking a skilled Data/Backend Engineer to join our team remotely from the Greater Hyderabad Area . This is a 6-month contract position with strong potential for permanent placement based on performance and company growth. As part of our core AI/ML team, you'll build robust data pipelines that transform healthcare data into actionable insights, supporting our mission to democratize autism care through technology. KEY RESPONSIBILITIES Data Engineering & Pipeline Development Design, build, and optimize batch and streaming data pipelines using Python and Pandas Transform and enrich structured and semi-structured datasets from healthcare sources including EHR systems, clinical assessments, and behavioral tracking data Ensure reliability, performance, and scalability of data ingestion processes handling sensitive healthcare information Implement data quality checks and monitoring systems to maintain data integrity across all pipelines API & Backend Development Develop RESTful APIs using Flask, FastAPI, or equivalent frameworks Build robust backend services for data access, aggregation, and delivery to internal teams and external healthcare partners Create efficient data serving layers that support real-time analytics and AI model inference Implement secure data access patterns compliant with healthcare data regulations Cloud & Infrastructure Deploy and manage data infrastructure on cloud platforms (AWS preferred, Azure/GCP acceptable) Implement serverless data processing solutions for cost-effective scaling Design and maintain data warehouses and data lakes optimized for healthcare analytics Ensure HIPAA-compliant data storage and processing practices TECHNICAL REQUIREMENTS Core Data Engineering Skills 3+ years of experience in data engineering roles with Python, SQL, and data pipeline development Strong proficiency in Python data stack : Pandas, NumPy, SQLAlchemy, and data manipulation libraries Experience with ETL/ELT frameworks : Apache Airflow, Prefect, or similar orchestration tools Hands-on experience with SQL databases (PostgreSQL, MySQL) and NoSQL systems (MongoDB, Redis) Knowledge of data warehousing concepts and tools like Snowflake, BigQuery, or Redshift API & Backend Development Proven experience building RESTful APIs using Flask, FastAPI, Django REST Framework, or similar Understanding of API authentication, rate limiting, and security best practices Experience with microservices architecture and containerization (Docker, Kubernetes) Knowledge of message queues and event-driven architecture (RabbitMQ, Apache Kafka) PREFERRED QUALIFICATIONS Cloud & DevOps Experience with AWS services: S3, Lambda, RDS, Redshift, Glue, or equivalent services on Azure/GCP Proficiency in Infrastructure as Code (Terraform, CloudFormation) Hands-on experience with CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI) Strong understanding of version control with Git and collaborative 
development workflows Healthcare & Domain Expertise Experience working with healthcare data and understanding of medical data formats (HL7, FHIR) Knowledge of healthcare compliance requirements (HIPAA, data privacy regulations) Familiarity with autism research, behavioral analysis, or special education domains Experience with remote patient monitoring or telehealth data systems
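The role above calls for RESTful APIs built with Flask or FastAPI on top of pandas-based data. A minimal FastAPI sketch follows; the endpoint path, field names, and the in-memory toy dataset are all hypothetical stand-ins for an aggregated, de-identified analytics table. It can be run locally with, for example, uvicorn app:app --reload.

import pandas as pd
from fastapi import FastAPI

app = FastAPI(title="clinical-metrics-api")

# Toy dataset standing in for an aggregated analytics table.
SESSIONS = pd.DataFrame({
    "clinician_id": ["c1", "c1", "c2"],
    "session_minutes": [60, 45, 90],
})

@app.get("/clinicians/{clinician_id}/utilization")
def clinician_utilization(clinician_id: str) -> dict:
    """Return simple utilization figures for one clinician."""
    subset = SESSIONS[SESSIONS["clinician_id"] == clinician_id]
    return {
        "clinician_id": clinician_id,
        "sessions": int(len(subset)),
        "total_minutes": int(subset["session_minutes"].sum()),
    }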

Posted 4 weeks ago

Apply

100.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

About Company: Our client is a global technology company headquartered in Santa Clara, California. It focuses on helping organisations harness the power of data to drive digital transformation, enhance operational efficiency, and achieve sustainability. It combines over 100 years of experience in operational technology (OT) and more than 60 years in IT to unlock the power of data from your business, your people, and your machines. We help enterprises store, enrich, activate, and monetise their data to improve their customers' experiences, develop new revenue streams, and lower their business costs. Over 80% of the Fortune 100 trust our client for data solutions. The company's consolidated revenues for fiscal 2024 (ended March 31, 2024) were approximately $57.5 billion USD, and the company has approximately 296,000 employees worldwide. It delivers digital solutions utilising Lumada in five sectors, including Mobility, Smart Life, Industry, Energy, and IT, to increase our customers' social, environmental, and economic value.

Job Title: Data Engineer
Location: Hyderabad, Bangalore, Pune (Remote)
Client: Hitachi
Experience: 6-9 years
Job Type: Contract to hire
Notice Period: Immediate joiners only

Experience: We are seeking a highly skilled and motivated Data Engineer with 6 to 9 years of hands-on experience in Data Engineering or a related role. The ideal candidate will have expertise in modern data engineering practices, a deep understanding of AWS cloud services, and the ability to build robust data pipelines and architectures.

Key Responsibilities:
Develop and maintain scalable data pipelines and workflows using automation and orchestration tools such as Airflow.
Build and optimize data architectures and models to support analytics and reporting needs.
Work extensively with AWS services such as Lambda, Glue, Athena, S3, Redshift, and EC2 for data processing and storage.
Ensure data integrity, quality, and security by implementing robust ETL processes and monitoring solutions.
Debug and troubleshoot data pipeline issues with strong analytical and problem-solving skills.
Implement modern data practices, including data lakes and real-time streaming processing capabilities.
Collaborate with cross-functional teams and adapt to rapidly changing technological landscapes.
Leverage tools like Git and CI/CD pipelines for version control and deployment automation.

Required Qualifications:
6-9 years of experience in Data Engineering or related fields.
Strong expertise in AWS cloud services (AWS Lambda, Glue, Athena, S3, etc.).
Proficiency in Python and SQL.
Solid understanding of data architecture and modeling concepts.
Experience with ETL tools (e.g., Pentaho, SSIS, Informatica, HVR).
Knowledge of database, data warehouse, and big data technologies.
Experience with monitoring and logging solutions.

Preferred Skills:
Knowledge of AI/ML and large language models (LLMs).
Experience with REST APIs and Salesforce APIs.

Technologies: AWS Lambda, AWS Glue, Athena, S3, Redshift, EC2; Airflow, Spark, Linux
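To make the Lambda-plus-S3 part of this stack concrete, here is a hedged sketch of a Python Lambda handler reacting to an S3 object-created event; the event structure follows the standard S3 notification format, while the logged fields and any bucket involved are illustrative only.

import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Triggered by an S3 PutObject notification; logs basic metadata about each new object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        head = s3.head_object(Bucket=bucket, Key=key)
        print(json.dumps({
            "bucket": bucket,
            "key": key,
            "size_bytes": head["ContentLength"],
        }))
    return {"status": "ok"}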

Posted 4 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Title: Senior Full Stack Application Developer Location: Gurgaon, WFO (US Shifts) Job Type: Full-Time Experience Level: Senior (8+ years) About the Role We are seeking a passionate Senior Full Stack Application Developer to join our dynamic and culturally diverse global team. This role is ideal for someone who lives and breathes code, thrives on solving complex problems, and has a proven track record of delivering high-quality software. If you’re ready to make a real impact and collaborate with top-tier engineers, we want to hear from you. Key Responsibilities Application Development & Innovation Build, maintain, and enhance modern full stack web applications using React , Node.js , and associated libraries (e.g., react-router, ag-grid, vite, redux). Collaborate closely with the team to bring innovative features to life and optimize user experiences. Infrastructure & Cloud Management Interact with AWS services including EC2 (Ubuntu) , S3 , and CloudFront . Perform tasks such as scripting, managing environment variables, updating packages, and handling security keys. CI/CD & DevOps Set up and maintain CI/CD pipelines (experience with GitLab is a plus; strong git fundamentals are essential). Leverage Linux knowledge to support and streamline deployment workflows. Database Development & Integration Design and manage database structures in Microsoft SQL Server , including writing complex SQL queries and understanding execution plans. Integrate data layers using ORMs such as Sequelize . Work with cloud data warehouses like Snowflake (experience with Redshift or BigQuery also valuable). Team Collaboration Participate in a remote-first, international team with clear, proactive communication. Attend daily standups at 11 AM EST and collaborate effectively across time zones. Understand both explicit instructions and the broader business goals behind your work. Required Qualifications 8+ years of professional experience in web-based development. Proven track record with open-source contributions or personal projects (GitHub portfolio encouraged). Expert in JavaScript/TypeScript , React , and Node.js . Proficiency with Express.js , Sequelize , and Microsoft SQL Server . Strong experience with Linux , AWS , and CI/CD pipelines. Experience working in agile teams with modern DevOps practices. Native-level English fluency (spoken and written). Availability to work overlapping hours with EST/PST time zones. Preferred Qualifications Experience with Snowflake , Redshift , or BigQuery . Familiarity with performance optimization and debugging in production environments. Prior experience working in fully remote, multicultural teams. Our Tech Stack Backend: Node.js, Express, Microsoft SQL Server, Sequelize Frontend: React, Redux DevOps & Cloud: AWS (EC2, S3, CloudFront, Lambda), GitLab CI/CD, Ubuntu Linux

Posted 4 weeks ago

Apply

7.5 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Data Engineering
Good to have skills: Microsoft SQL Server, Python (Programming Language), Snowflake Data Warehouse
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities: Hands-on development experience in Data Warehousing and/or Software Development. Utilize tools and best practices to build, verify, and deploy data solutions efficiently. Perform data integration and sourcing activities across various platforms. Develop data assets to support optimized analysis for customer and regulatory outcomes. Provide ongoing support for data platforms, including problem and incident management. Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally. Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
Must Have Skills: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow). Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin. Advanced skills in SQL and Python. Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
Position: Senior Analyst, Data Engineering
Reports to: Manager, Data Engineering
Division: Personal Bank
Group: 3
Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred
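As a concrete illustration of the "data integration and sourcing" work this role describes on AWS, here is a minimal, hypothetical sketch of loading curated S3 data into Redshift with a COPY statement via psycopg2. The cluster endpoint, schema, table, bucket path, and IAM role are invented for illustration and are not details from the posting.

```python
# Minimal sketch: bulk-load a curated S3 Parquet partition into Redshift via COPY.
import psycopg2

COPY_SQL = """
    COPY analytics.customer_daily
    FROM 's3://example-curated-bucket/customer_daily/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # illustrative endpoint
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",  # in practice, fetched from a secrets manager rather than hard-coded
)
with conn, conn.cursor() as cur:   # connection context manager commits the transaction on success
    cur.execute(COPY_SQL)          # Redshift COPY is the idiomatic bulk-load path from S3
```

In practice a step like this would usually be wrapped in an Airflow task and paired with row-count or data-quality checks after the load.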

Posted 4 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

hackajob is collaborating with Comcast to connect them with exceptional tech professionals for this role. Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary: Responsible for working cross-functionally to collect data and develop models to determine trends utilizing a variety of data sources. Retrieves, analyzes and summarizes business, operations, employee, customer and/or economic data in order to develop business intelligence, optimize effectiveness, predict business outcomes and support decision-making. Involved with numerous key business decisions by conducting the analyses that inform our business strategy. This may include: impact measurement of new products or features via normalization techniques, optimization of business processes through robust A/B testing, clustering or segmentation of customers to identify opportunities for differentiated treatment, deep dive analyses to understand drivers of key business trends, identification of customer sentiment drivers through natural language processing (NLP) of verbatim responses to Net Promoter System (NPS) surveys, and development of frameworks to drive upsell strategy for existing customers by balancing business priorities with customer activity. Works with moderate guidance in own area of knowledge.

Job Description: 2-4 years of professional experience in software or data engineering roles. Hands-on experience with Power BI, Power BI Desktop, Power Apps, and Power Automate. Proficiency with Tableau and SharePoint. Familiarity with Amazon Redshift and SAP integration and data extraction. Strong analytical, troubleshooting, and communication skills.

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools that are personalized to meet the needs of your reality - to help support you physically, financially and emotionally through the big milestones and in your everyday life.
Education: Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.
Relevant Work Experience: 2-5 Years
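To make the "robust A/B testing" responsibility above concrete, here is a minimal, hypothetical sketch of a two-proportion significance test on conversion counts. The numbers, variable names, and the 5% significance threshold are illustrative assumptions, not Comcast data or methodology.

```python
# Minimal sketch (illustrative numbers only): compare conversion rates between
# a control and a treatment group with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical experiment results: conversions and exposures per variant.
conversions = [412, 465]      # control, treatment
exposures = [10_000, 10_050]  # control, treatment

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)

control_rate = conversions[0] / exposures[0]
treatment_rate = conversions[1] / exposures[1]
print(f"control {control_rate:.2%} vs treatment {treatment_rate:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# One common (assumption-laden) decision rule: declare a significant lift at alpha = 0.05.
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```

Real experiment analysis would also account for sample sizing, multiple comparisons, and pre-registered metrics; this sketch only shows the basic test.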

Posted 4 weeks ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

India

On-site

Hi, Job Description

Problem Formulation: Identifies possible options to address business problems and must possess a good understanding of dimensional modelling. Must have worked on at least one end-to-end project using any cloud data warehouse (Azure Synapse, AWS Redshift, Google BigQuery). Good to have an understanding of Power BI and its integration with cloud services such as Azure or GCP. Experience of working with SQL Server and SSIS (preferred).

Applied Business Acumen: Supports the development of business cases and recommendations. Owns delivery of project activity and tasks assigned by others. Supports process updates and changes. Solves business issues.

Data Transformation/Integration/Optimization: The ETL developer is responsible for designing and creating the data warehouse and all related data extraction, transformation and load functions in the company. The developer should provide oversight and planning of data models, database structural design and deployment, and work closely with the data architect and business analyst. Duties include working in cross-functional software development teams (business analysts, testers, developers) following agile ceremonies and development practices. The developer plays a key role in contributing to the design, evaluation, selection, implementation and support of database solutions.

Development and Testing: Develops code for the required solution by determining the appropriate approach and leveraging business, technical, and data requirements. Creates test cases to review and validate the proposed solution design. Works on POCs and deploys the software to production servers.

Good to Have (Preferred Skills): Minimum 4-8 years of experience in data warehouse design and development for large-scale applications. Minimum 3 years of experience with star schema, dimensional modelling and extract, transform, load (ETL) design and development. Expertise working with various databases (SQL Server, Oracle). Experience developing packages, procedures, views and triggers. Nice to have: big data technologies. The individual must have good written and oral communication skills. Nice to have: SSIS.

Education and Experience: Minimum 4-8 years of software development experience. Bachelor's and/or Master's degree in computer science.

Please reply with the details below:
Total Experience:
Relevant Experience:
Current CTC:
Expected CTC:
Any offers: Y/N
Qualification:
Notice Period:
DOB:
Present Company Name:
Designation:
Domain:
Reason for Job Change:
Current Location:

Skills: ETL, Amazon Web Services (AWS), Google Cloud Platform (GCP), SSIS and Cloud Data Warehouse
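For orientation on the star-schema and dimensional-modelling requirement above, here is a minimal, hypothetical sketch that creates a tiny star schema (one fact table and two dimensions) in SQL Server via pyodbc. The table names, columns, and connection string are illustrative assumptions, not the employer's model.

```python
# Minimal sketch: create a small star schema in SQL Server through pyodbc.
import pyodbc

DDL_STATEMENTS = [
    """CREATE TABLE dim_date (
           date_key      INT PRIMARY KEY,      -- e.g. 20240601
           calendar_date DATE NOT NULL,
           month_name    VARCHAR(20) NOT NULL,
           year_number   INT NOT NULL
       )""",
    """CREATE TABLE dim_product (
           product_key   INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
           product_code  VARCHAR(50) NOT NULL,           -- natural/business key
           product_name  VARCHAR(200) NOT NULL,
           category      VARCHAR(100) NOT NULL
       )""",
    """CREATE TABLE fact_sales (
           date_key      INT NOT NULL REFERENCES dim_date (date_key),
           product_key   INT NOT NULL REFERENCES dim_product (product_key),
           quantity      INT NOT NULL,
           net_amount    DECIMAL(18, 2) NOT NULL
       )""",
]

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=example-sql;DATABASE=dw;Trusted_Connection=yes;"
)
cursor = conn.cursor()
for statement in DDL_STATEMENTS:   # run each DDL statement in turn
    cursor.execute(statement)
conn.commit()
```

In an SSIS- or ETL-driven warehouse, packages would then populate the dimensions (managing surrogate keys) before loading the fact table.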

Posted 1 month ago

Apply

9.0 - 14.0 years

10 - 20 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Job Summary: (Full-Time Opportunity with MNC Client) We are looking for a skilled Data Engineer to join our team and build scalable data pipelines using Python, PySpark, and AWS services like S3 and Redshift. The ideal candidate will be responsible for collecting data from multiple sources, transforming and normalizing it, and ensuring its availability for downstream analytics and visualization platforms.

Key Responsibilities:
• Design and develop ETL/ELT pipelines using PySpark and Python
• Ingest and process data from diverse sources, including APIs, files, and databases
• Store, manage, and organize data efficiently in AWS S3
• Transform and load processed data into AWS Redshift for reporting and analytics
• Implement data validation, quality checks, and normalization standards
• Collaborate with analytics and BI teams to ensure data is structured for dashboards and visualization tools
• Monitor, troubleshoot, and optimize pipeline performance

Required Skills & Experience:
• Strong proficiency in Python and PySpark
• Hands-on experience with AWS services: S3, Redshift, Lambda, Glue (preferred)
• Solid understanding of data modeling, ETL processes, and SQL
• Experience working with large-scale datasets and distributed data processing
• Familiarity with data quality and governance best practices
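Here is a minimal, hypothetical PySpark sketch of the ingest-normalize-stage flow the responsibilities above describe. The bucket names, columns, and schema are assumptions for illustration only; loading the staged Parquet into Redshift would typically follow via a COPY command.

```python
# Minimal sketch under assumed names: read raw CSV, apply basic validation and
# normalization, and stage the result in S3 as partitioned Parquet for Redshift.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("s3://example-raw-bucket/orders/2024-06-01/")   # illustrative source path
)

clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])            # basic quality check: de-duplicate on the key
       .filter(F.col("amount").isNotNull())     # drop rows that fail validation
)

(
    clean.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-curated-bucket/orders/")  # staged for a Redshift COPY
)

spark.stop()
```

Partitioning the staged data by date keeps both the Spark writes and the downstream Redshift loads incremental rather than full-refresh.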

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

The Company

JOB DESCRIPTION: Oracle is the world's leading provider of business software. With a presence in over 175 countries, we are one of the biggest technology companies on the planet. We're using innovative emerging technologies to tackle real-world problems today. From advancing energy efficiency to re-imagining online commerce, the work we do is not only transforming the world of business; it is helping advance governments, power nonprofits, and give billions of people the tools they need to outpace change. For more information about Oracle (NYSE: ORCL), visit us at oracle.com.

Oracle's commitment to R&D is a driving factor in the development of technologies that have kept Oracle at the forefront of the computer industry. If you are passionate about advanced development and working on next-generation, large-scale distributed systems for the most popular open-source database in the world, optimized for the cloud to provide the best performance, we would like to talk with you.

What you will do: The HeatWave service team is responsible for the massively parallel, high-performance, in-memory query accelerator. HeatWave is 6.5X faster than Amazon Redshift at half the cost, 7X faster than Snowflake at one-fifth the cost, and 1400X faster than Amazon Aurora at half the cost. It is the only cloud-native database service that combines transactions, analytics, and machine learning services into HeatWave, delivering real-time, secure analytics without the complexity, latency, and cost of ETL duplication. This eliminates the need for complex, time-consuming, and expensive data movement and integration with a separate analytics database. The new MySQL Autopilot uses advanced machine-learning techniques to automate HeatWave, making it easier to use and further improving performance and scalability. Join us to help further develop this amazing technology. This cutting-edge technology serves critical business needs and is changing the way data transactions function all over the world. You will make a technical impact on the world with the work you do. Join a fun and flexible workplace where you'll enhance your skills and build a solid professional foundation.

As a Cloud Operations engineer for Oracle's HeatWave service team, you will contribute to an exciting team working on one of the hottest cloud services. You will use your skills to learn how to constantly deliver and improve on these tremendous cloud services. Operations work will include troubleshooting production issues and handling requests for upgrades, patches or modifications. When not working on operations, you will be working on software engineering tasks such as reviewing incidents to drive improvement of services, tools or runbooks to increase our reliability and scalability and reduce operational overhead through automation, training, documentation, service enhancement, or process improvements. This position has the opportunity to leverage and learn the ins and outs of current cloud service architecture, deployment, monitoring and operational technologies. There are many useful and desirable skills which will be acquired if not already present. See below for the many cool and current technologies in play. The ideal candidate has some of these skills, but the key is the motivation and ability to learn quickly, as well as a passion for an excellent customer experience.
Learn more at https://www.mysql.com/products/heatwave/

Career Level - IC2

Responsibilities: The engineer will:
Improve monitoring, notifications and configuration of HeatWave services.
Perform proactive service checks and monitor, triage and address incoming system/application alerts, emails and phone calls to ensure appropriate priority and meet service SLA response times.
Triage and troubleshoot service-impacting events from multiple signals including phone, email, service telemetry and alerting.
Participate in activities for services such as upgrades and patching.
Identify and work with engineering to implement opportunities for automation, signal noise reduction, recurring issues and other actions to reduce time to mitigate service-impacting events and increase the productivity of cloud operations and development resources.
Coordinate, document and track critical incidents, ensuring rapid and complete issue resolution and an appropriate closed loop to customers and other key stakeholders.
Contribute to a healthy, supportive and inclusive team culture.
Provide feedback to development teams about operations administration dashboard functionality and UIs.
Up-skill by learning new features delivered for the service in accordance with the product roadmap.
Improve the availability, scalability, latency, ease of use, and efficiency of service control planes and operational tooling.
Participate in service capacity planning and demand forecasting, software performance analysis and system tuning.
Support the secondary HeatWave on AWS cloud service as per business requirements.
Potentially participate in regular rotations as a central part of the 24x7 Operations team; this includes rotational work on weekends, public holidays, and US East timezone shifts. Need to be reliable in terms of working scheduled hours. Need to be a motivated quick learner.

Desired Skills Include:
AWS-specific skills, which would be a plus but are not strictly required: Familiarity with AWS services (e.g., Lambda, Step Functions, DynamoDB, AWS Session Manager, CloudWatch, etc.). Familiarity with OCI or equivalent cloud services (e.g., IAM, Compute, Load Balancer, Object Storage, Health Monitor).
General skills for working in this operational role:
Basic understanding of serverless cloud architecture.
Familiarity with MySQL database, SQL query interface and general database concepts.
Experience with Python programming, bash scripting and Git.
Basic Linux system administration knowledge and experience, and familiarity with Linux troubleshooting and internals.
Familiarity with networking concepts and the DevOps model.
Work productively in a fast-paced, team-oriented Agile environment.
Contribute to operational activities such as writing runbooks, troubleshooting, operations automation, and instrumentation for metrics and events.
Good technical writing and communication skills. Engineers will need to be able to clearly write descriptions of operational issues and corrective actions for incidents.
Experience with Agile methodology (Scrum or Kanban).
Very strong analytical skills to identify problem root causes.
Experience in collaborating with cross-functional teams like Development, QA, Product Management, etc.
A systematic problem-solving approach, combined with a strong sense of ownership and drive in resolving operations issues.
Experience working under pressure to mitigate customer issues affecting service reliability, data integrity, and overall customer experience.
Monitoring, management, analysis and troubleshooting of large-scale, distributed systems.
BS/BE or MS/MTech degree in Computer Science, Electrical/Hardware Engineering or a related field.
2+ years of experience delivering and operating large-scale, highly available distributed systems.
2+ years of work experience as a Software, Site Reliability, Operations or Customer Support engineer.

Qualifications: Career Level - IC2

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
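For a sense of the "proactive service checks" side of this role, here is a minimal, hypothetical health-probe sketch in Python against a MySQL-compatible endpoint; it is illustrative only and not Oracle's internal tooling. The endpoint, credentials, and exit-code convention are assumptions.

```python
# Minimal sketch: a lightweight probe that runs a trivial query against a MySQL/HeatWave
# endpoint, reports latency, and exits non-zero so an external monitor can alert on failure.
import sys
import time

import mysql.connector  # pip install mysql-connector-python


def probe(host: str, user: str, password: str, timeout_s: int = 5) -> float:
    """Return the round-trip time of a trivial query in seconds, raising on any failure."""
    start = time.monotonic()
    conn = mysql.connector.connect(
        host=host, user=user, password=password, connection_timeout=timeout_s
    )
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        cur.fetchone()
    finally:
        conn.close()
    return time.monotonic() - start


if __name__ == "__main__":
    try:
        latency = probe("example-heatwave-endpoint", "monitor", "***")
    except Exception as exc:  # connection, auth, or query failure: page the on-call
        print(f"CRITICAL: probe failed: {exc}")
        sys.exit(2)
    print(f"OK: probe latency {latency:.3f}s")
    sys.exit(0)
```

A scheduler or monitoring agent would run a probe like this on an interval and convert non-zero exits or high latencies into alerts, which is the feedback loop the responsibilities above describe.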

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies