5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Application Lead Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Data Analytics Good to have skills: Microsoft SQL Server, Python (Programming Language), AWS Redshift Minimum 5 Year(s) Of Experience Is Required Educational Qualification: 15 years full time education Summary The purpose of the Data Engineering function within the Data and Analytics team is to develop and deliver great data assets and data domain management for our Personal Banking customers and colleagues seamlessly and reliably every time. As a Senior Data Engineer, you will bring expertise in data handling, curation and conformity capabilities to the team; support the design and development of solutions which assist analysis of data to drive tangible business benefit; and assist colleagues in developing solutions that will enable the capture and curation of data for analysis, analytical and/or reporting purposes. The Senior Data Engineer must be experienced working as part of an agile team to develop a solution in a complex enterprise. Roles & Responsibilities Hands-on development experience in Data Warehousing and/or Software Development Experience utilising tools and practices to build, verify and deploy solutions in the most efficient ways Experience in Data Integration and Data Sourcing activities Experience developing data assets to support optimised analysis for customer and regulatory outcomes. Provide ongoing support for platforms as required, e.g. problem and incident management Experience in Agile software development including GitHub, Confluence, Rally Professional & Technical Skills Experience with cloud technologies, especially AWS (S3, Redshift, Airflow), DevOps and DataOps tools (Jenkins, Git, Erwin) Advanced SQL and Python user Knowledge of UNIX, Spark and Databricks Additional Information Position: Senior Analyst, Data Engineering Reports to: Manager, Data Engineering Division: Personal Bank Group: 3 Industry/domain skills: Some expertise in Retail Banking, Business Banking and/or Wealth Management preferred Job Requirements: A degree in Computer Science or Information Technology preferred Office Location: NAB Innovation Centre India, DLF Downtown, 3rd Floor, Tower 3, DLF City Phase 3 Road, Sector 25A, Gurugram, Haryana 122002
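For context on the AWS stack this role lists (S3, Redshift, Airflow, SQL/Python), here is a minimal illustrative sketch of a daily S3-to-Redshift load; the connection details, bucket, table and IAM role names are placeholders, not taken from the posting.

```python
# A minimal, illustrative Airflow DAG: land a daily extract from S3 into Redshift
# via a COPY command. Endpoint, credentials, bucket and table names are placeholders.
from datetime import datetime

import psycopg2
from airflow import DAG
from airflow.operators.python import PythonOperator

def copy_s3_to_redshift(**_):
    # Assumes the cluster can read the bucket through the attached IAM role below.
    conn = psycopg2.connect(
        host="example-cluster.redshift.amazonaws.com",  # placeholder endpoint
        port=5439, dbname="analytics", user="etl_user", password="***",
    )
    with conn, conn.cursor() as cur:
        cur.execute("""
            COPY curated.customer_events
            FROM 's3://example-bucket/customer_events/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
            FORMAT AS PARQUET;
        """)

with DAG(
    dag_id="s3_to_redshift_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="copy_to_redshift", python_callable=copy_s3_to_redshift)
```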
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of responsibility About The Role We are seeking a skilled Dataiku Developer to join our data science workbench program team. The ideal candidate will design, develop, and deploy scalable data pipelines and advanced analytics and AI/ML solutions using the Dataiku platform. You will work closely with data scientists, engineers, and business stakeholders to enable data-driven decision-making. Key Responsibilities Design, develop, extend and maintain end-to-end data workflows and pipelines in Dataiku DSS. Collaborate with data scientists and analysts to operationalize machine learning models. Leverage Generative AI models and tools within Dataiku to build advanced AI-powered applications and analytics solutions. Integrate Dataiku with various data sources (databases, cloud storage, APIs). Develop and optimize SQL queries and Python/R scripts for data extraction and transformation across relational and NoSQL databases. Work extensively with cloud data warehouses like Amazon Redshift and/or Snowflake for data ingestion, transformation, and analytics. Implement automation and scheduling of data workflows. Monitor and troubleshoot data pipelines to ensure data quality and reliability. Document technical solutions and best practices for data processing and analytics.
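As a rough illustration of the Dataiku DSS recipe work described above, here is a minimal Python recipe sketch; the dataset and column names are assumptions, not taken from the posting.

```python
# Illustrative Dataiku DSS Python recipe: read an input dataset, derive a feature,
# and write the result. Dataset and column names are placeholders.
import dataiku
import pandas as pd

orders = dataiku.Dataset("orders_raw").get_dataframe()

# Simple transformation step of the kind a Dataiku flow would chain together.
orders["order_month"] = pd.to_datetime(orders["order_date"]).dt.to_period("M").astype(str)
monthly = orders.groupby("order_month", as_index=False)["amount"].sum()

dataiku.Dataset("orders_monthly").write_with_schema(monthly)
```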
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Who We Are Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products. All of which enables more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value – and that we do. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2500 distributors and partners, and over 2 million policyholders. Who You Are We are seeking a highly motivated Senior Data Analyst with strong technical expertise, business acumen, and strategic problem-solving abilities . In this role, you will independently own and drive analytics initiatives within the Operations team , translating data into actionable insights that improve efficiency, decision-making, and key business KPIs. You will work closely with stakeholders across Operations, Product, Data Engineering, and Business Strategy to identify opportunities for process optimization, automate decision-making, and create scalable analytics frameworks. This is a high-impact individual contributor role that requires both deep analytical skills and the ability to influence business strategy through data. What You’ll Do Drive analytics strategy: Independently own and drive key analytics initiatives in Operations, proactively identifying areas for efficiency improvements and cost optimization. Advanced analytics & measurement: Move beyond basic dashboards and leverage inferential modeling, causal analysis, and experimental design to generate actionable insights. Experimentation & testing: Design and implement A/B tests to measure the impact of operational improvements, optimizing key processes such as fraud detection, customer interactions, and compliance. Operational KPIs & business impact: Develop frameworks to measure Turnaround Time (TAT), Cost Per Transaction, SLA adherence, and other key operational metrics, ensuring data-driven decision-making. Data storytelling & visualization: Translate complex data insights into clear, actionable recommendations using visual storytelling techniques in Power BI and other visualization tools. Cross-functional collaboration: Work closely with stakeholders across Operations, Data Engineering, and Product to align analytics initiatives with business needs. Scalability & automation: Partner with Data Engineering to enhance data pipelines, data models, and automation efforts that improve efficiency and reduce manual work. Thought leadership & best practices: Drive data analysis best practices and mentor junior analysts, fostering a culture of analytical rigor and excellence. What You’ll Need 5+ years of experience in data analytics, with a focus on Operations, Business Strategy, or Process Optimization. Expertise in SQL, Python and with a strong ability to work with relational cloud databases (Redshift, BigQuery, Snowflake) and unstructured datasets. Experience designing A/B tests and experimentation frameworks to drive operational improvements. Strong statistical knowledge, including regression analysis, time-series forecasting, and causal inference modeling. Experience in operations analytics such as workforce efficiency, process optimization, risk modeling, and compliance analytics. Hands-on experience with data visualization tools (Power BI, Tableau, Looker) and the ability to present insights effectively to leadership. 
Ability to work independently, take ownership of projects, and influence business decisions through data-driven recommendations. Strong problem-solving skills and a proactive mindset to identify business opportunities using data Bonus Points If You Have Experience with ML/AI applications in operational efficiency (e.g., anomaly detection, predictive modeling, workforce automation). Familiarity with event-tracking frameworks and behavioral analytics. Strong data storytelling skills—can translate complex data into concise, compelling narratives. Prior experience in a fast-paced, high-growth environment with a focus on scaling data analytics. WHAT’S IN IT FOR YOU? At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.
Posted 1 week ago
5.0 - 10.0 years
12 - 20 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Responsibilities: Build and manage data infrastructure on AWS, including S3, Glue, Lambda, OpenSearch, Athena, and CloudWatch, using IaC tools like Terraform. Design and implement scalable ETL pipelines with integrated validation and monitoring. Set up data quality frameworks using tools like Great Expectations, integrated with PostgreSQL or AWS Glue jobs. Implement automated validation checks at key points in the data flow: post-ingest, post-transform, and pre-load. Build centralized logging and alerting pipelines (e.g., using CloudWatch Logs, Fluent Bit, SNS, Filebeat, Logstash, or third-party tools). Define CI/CD processes for deploying and testing data pipelines (e.g., using Jenkins, GitHub Actions). Collaborate with developers and data engineers to enforce schema versioning, rollback strategies, and data contract enforcement. Preferred candidate profile 5+ years of experience in DataOps, DevOps, or data infrastructure roles. Proven experience with infrastructure-as-code (e.g., Terraform, CloudFormation). Proven experience with real-time data streaming platforms (e.g., Kinesis, Kafka). Proven experience building production-grade data pipelines and monitoring systems in AWS. Hands-on experience with tools like AWS Glue, S3, Lambda, Athena, and CloudWatch. Strong knowledge of Python and scripting for automation and orchestration. Familiarity with data validation frameworks such as Great Expectations, Deequ, or dbt tests. Experience with SQL-based data systems (e.g., PostgreSQL). Understanding of security, IAM, and compliance best practices in cloud data environments.
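To make the post-transform validation checks mentioned above concrete, here is a minimal sketch using Great Expectations' classic pandas interface; the staging path and column names are assumptions (reading an s3:// path with pandas also assumes s3fs is installed).

```python
# Sketch of a post-transform validation step using Great Expectations' classic
# pandas API; table and column names are illustrative only.
import great_expectations as ge
import pandas as pd

df = pd.read_parquet("s3://example-bucket/staging/transactions/")  # assumed staging output
gdf = ge.from_pandas(df)

gdf.expect_column_values_to_not_be_null("transaction_id")
gdf.expect_column_values_to_be_unique("transaction_id")
gdf.expect_column_values_to_be_between("amount", min_value=0)

results = gdf.validate()
if not results.success:
    # In a real pipeline this might publish to SNS before failing the run.
    raise ValueError(f"Post-transform validation failed: {results.statistics}")
```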
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Analytics Engineer We are seeking a talented, motivated and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in the Human Health transformation journey to become the premier “Data First” commercial biopharma organization. As an Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical/data expertise for the development of analytical data products to enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space – to develop best-in-class data pipelines and products, working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery. Your specific responsibilities will include Hands-on development of last-mile data products using the most up-to-date technologies and software / data / DevOps engineering practices Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions and utilizing datasets in an optimal way Develop deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts and visualization developers on how to use these data models Develop analytical data products for reusability, governance and compliance by design Align with organization strategy and implement a semantic layer for analytics data products Support data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks Education B.Tech / B.S., M.Tech / M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or related field Required Experience 5+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience in analyzing, modeling and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets) High proficiency in SQL, Python and AWS Good understanding and comprehension of the requirements provided by Data Product Owner and Lead Analytics Engineer Experience creating / adopting data models to meet requirements from Marketing, Data Science, Visualization stakeholders Experience with feature engineering Experience with cloud-based (AWS / GCP / Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.) Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. 
Dataiku) Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders Experience in analytics use cases of pharmaceutical products and vaccines Experience in market analytics and related use cases Preferred Experience Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines Experience with Agile ways of working, leading or working as part of scrum teams Certifications in AWS and/or modern data technologies Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors Experience in building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers and business stakeholders Experience with data visualization technologies (e.g., Power BI) We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another’s thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace. Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status Regular Relocation VISA Sponsorship Travel Requirements Flexible Work Arrangements Hybrid Shift Valid Driving License Hazardous Material(s) Required Skills Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model Preferred Skills Job Posting End Date 08/31/2025 A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID R335388
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment. Design and implement ETL workflows using AWS Glue, Python, and PySpark. Develop and optimize queries using Amazon Athena and Redshift. Build scalable data pipelines to ingest, transform, and load data from various sources. Ensure data quality, integrity, and security across AWS services. Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions. Monitor and troubleshoot ETL jobs and cloud infrastructure performance. Automate data workflows and integrate with CI/CD pipelines. Required Skills & Qualifications Hands-on experience with AWS Glue, Athena, and Redshift. Strong programming skills in Python and PySpark. Experience with ETL design, implementation, and optimization. Familiarity with S3, Lambda, CloudWatch, and other AWS services. Understanding of data warehousing concepts and performance tuning in Redshift. Experience with schema design, partitioning, and query optimization in Athena. Proficiency in version control (Git) and agile development practices.
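For illustration of the Glue/PySpark-to-S3/Athena flow this posting describes, here is a minimal Glue job skeleton; the catalog database, table and bucket names are placeholders, not from the posting.

```python
# Illustrative AWS Glue (PySpark) job skeleton: read a catalog table, clean it,
# and write a partitioned, Athena-friendly copy to S3. Names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
).toDF()

# Light transformation before writing a partitioned parquet copy for Athena.
cleaned = raw.dropDuplicates(["order_id"]).withColumn("ingest_date", F.current_date())
(cleaned.write.mode("overwrite")
    .partitionBy("ingest_date")
    .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```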
Posted 1 week ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Overview Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design and development of software products as well as research and evaluation of new technical solutions. Responsibilities Designing, building, testing, and deploying data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.). Developing, monitoring, maintaining, and tuning data pipelines. Write at-scale data transformations in SQL and Python. Perform code reviews and provide leadership and guidance to junior developers. Qualifications Curiosity in learning the business requirements that are driving the engineering requirements. Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team. 3+ years of SQL experience. 3+ years of professional Python experience. 3+ years of professional Linux experience. Preferred Familiarity With Snowflake, AWS, GCP, Azure Cloud Environments. Intellectual curiosity and drive; self-starters will thrive in this position. Passion for Technology: Excitement for new technology, bleeding edge applications, and a positive attitude towards solving real world challenges. Additional Skills BS, MS or PhD in Computer Science, Engineering, or equivalent real-world experience. Experience with big data and/or infrastructure. Bonus for having experience in setting up petabytes of data so they can be easily accessed. Understanding of data organization, i.e. partitioning, clustering, file sizes, file formats. Experience working with classical relational databases (Postgres, MySQL, MSSQL). Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations). Proven ability to independently execute projects from concept to implementation to launch and to maintain a live product. Perks of working at Annalect We have an incredibly fun, collaborative, and friendly environment, and often host social and learning activities such as game night, speaker series, and so much more! Halloween is a special day on our calendar since it is our Founding Day - we go all out with decorations, costumes, and prizes! Generous vacation policy. Paid time off (PTO) includes vacation days, personal days, and a Summer Friday program. Extended time off around the holiday season. Our office is closed between Xmas and New Year to encourage our hardworking employees to rest, recharge and celebrate the season with family and friends. As part of Omnicom, we have the backing and resources of a global billion-dollar company, but also have the flexibility and pace of a “startup” - we move fast, break things, and innovate. Work with a modern stack and environment to keep on learning and improving, helping to experiment with and shape the latest technologies.
Posted 1 week ago
4.0 - 9.0 years
15 - 25 Lacs
Hyderabad
Work from Office
Python; experienced in performing ETL and Data Engineering concepts (PySpark, NumPy, Pandas, AWS Glue and Airflow); SQL (exclusively Oracle) hands-on work experience; SQL profilers / query analyzers; AWS cloud-related services (S3, RDS, Redshift); ETL; Python
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Job Summary We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment. Design and implement ETL workflows using AWS Glue, Python, and PySpark. Develop and optimize queries using Amazon Athena and Redshift. Build scalable data pipelines to ingest, transform, and load data from various sources. Ensure data quality, integrity, and security across AWS services. Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions. Monitor and troubleshoot ETL jobs and cloud infrastructure performance. Automate data workflows and integrate with CI/CD pipelines. Required Skills & Qualifications Hands-on experience with AWS Glue, Athena, and Redshift. Strong programming skills in Python and PySpark. Experience with ETL design, implementation, and optimization. Familiarity with S3, Lambda, CloudWatch, and other AWS services. Understanding of data warehousing concepts and performance tuning in Redshift. Experience with schema design, partitioning, and query optimization in Athena. Proficiency in version control (Git) and agile development practices.
Posted 1 week ago
9.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Description This is a remote position. Job Description We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0. Responsibilities Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze → silver → gold data layers using DBT or Coalesce. Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Requirements Essential Skills: Job Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions,Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modeling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers. Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates. Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations. Personal Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail with discretion. The candidate must have strong work ethics and trustworthiness Must be highly collaborative and team oriented with commitment to excellence. Preferred Skills Job Proficiency in SQL and at least one programming language (e.g., Python, Scala). Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services. Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs). Experience with data modeling, data structures, and database design. 
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka). Personal Demonstrate proactive thinking. Should have strong interpersonal relations, expert business acumen and mentoring skills. Have the ability to work under stringent deadlines and demanding client conditions. Ability to work under pressure to achieve the multiple daily deadlines for client deliverables with a mature approach. Other Relevant Information Bachelor’s in Engineering with specialization in Computer Science, Artificial Intelligence, Information Technology or a related field. 9+ years of experience in data engineering and data architecture. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
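As a sketch of the bronze → silver layering mentioned in the responsibilities above, here is a minimal Databricks/PySpark Delta step; the lake paths, columns and quality rules are illustrative assumptions.

```python
# Sketch of a bronze -> silver step (Databricks / PySpark with Delta Lake);
# paths and schema expectations are placeholders for this role's stack.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

bronze = spark.read.format("delta").load("/mnt/lake/bronze/customers")

silver = (
    bronze
    .filter(F.col("customer_id").isNotNull())   # basic schema/quality enforcement
    .dropDuplicates(["customer_id"])
    .withColumn("load_ts", F.current_timestamp())
)

(silver.write.format("delta")
    .mode("overwrite")
    .save("/mnt/lake/silver/customers"))
```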
Posted 1 week ago
4.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description – Senior Data Engineer We at Pine Labs are looking for those who share our core belief - “Every Day is Game day”. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services. Role Purpose We are looking for skilled Senior Data Engineers with 4-12 years of experience to join our growing team. You will design, build, and optimize real-time and batch data pipelines, leveraging AWS cloud technologies and Apache Pinot to enable high-performance analytics for our business. This role is ideal for engineers who are passionate about working with large-scale data and real-time processing. Responsibilities We Entrust You With Data Pipeline Development: Build and maintain robust ETL/ELT pipelines for batch and streaming data using tools like Apache Spark, Apache Flink, or AWS Glue. Develop real-time ingestion pipelines into Apache Pinot using streaming platforms like Kafka or Kinesis. Real-Time Analytics Configure and optimize Apache Pinot clusters for sub-second query performance and high availability. Design indexing strategies and schema structures to support real-time and historical data use cases. Cloud Infrastructure Management Work extensively with AWS services such as S3, Redshift, Kinesis, Lambda, DynamoDB, and CloudFormation to create scalable, cost-effective solutions. Implement infrastructure as code (IaC) using tools like Terraform or AWS CDK. Performance Optimization Optimize data pipelines and queries to handle high throughput and large-scale data efficiently. Monitor and tune Apache Pinot and AWS components to achieve peak performance. Data Governance & Security Ensure data integrity, security, and compliance with organizational and regulatory standards (e.g., GDPR, SOC2). Implement data lineage, access controls, and auditing mechanisms. Collaboration Work closely with data scientists, analysts, and other engineers to translate business requirements into technical solutions. Collaborate in an Agile environment, participating in sprints, standups, and retrospectives. Relevant Work Experience 4-12 years of hands-on experience in data engineering or related roles. Proven expertise with AWS services and real-time analytics platforms like Apache Pinot or similar technologies (e.g., Druid, ClickHouse). Proficiency in Python, Java, or Scala for data processing and pipeline development. Strong SQL skills and experience with both relational and NoSQL databases. Hands-on experience with streaming platforms such as Apache Kafka or AWS Kinesis. Familiarity with big data tools like Apache Spark, Flink, or Airflow. Strong problem-solving skills and a proactive approach to challenges. Excellent communication and collaboration abilities in cross-functional teams. Preferred Qualifications Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg). Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to monitoring tools like Prometheus, Grafana, or CloudWatch. Familiarity with data visualization tools like Tableau or Superset. What We Offer Competitive compensation based on experience. Flexible work environment with opportunities for growth. Work on cutting-edge technologies and projects in data engineering and analytics. 
What We Value In Our People You take the shot: You Decide Fast and You Deliver Right You are the CEO of what you do: you show ownership and make things happen You own tomorrow: by building solutions for the merchants and doing the right thing You sign your work like an artist: You seek to learn and take pride in the work you do
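As an illustration of querying the kind of real-time Apache Pinot table this role builds (fed from Kafka or Kinesis), here is a small sketch using the pinotdb client; the broker host, table and column names are assumptions, not from the posting.

```python
# Illustrative check of a real-time Pinot table via the pinotdb client.
# Broker host, table, and columns are placeholders; event_ts is assumed to be epoch millis.
from pinotdb import connect

conn = connect(host="pinot-broker.example.internal", port=8099,
               path="/query/sql", scheme="http")
cur = conn.cursor()

# Sub-second aggregation over the most recent events in the real-time table.
cur.execute("""
    SELECT merchant_id, COUNT(*) AS txn_count, SUM(amount) AS gmv
    FROM payment_events
    WHERE event_ts > ago('PT15M')
    GROUP BY merchant_id
    ORDER BY gmv DESC
    LIMIT 10
""")
for row in cur:
    print(row)
```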
Posted 1 week ago
4.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description – BI Analyst (Senior Engineer/Lead) We at Pine Labs are looking for those who share our core belief - “Every Day is Game day”. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services. Role Purpose We are looking for a Sr. BI Analyst / Lead who will support the BI Analysts team in implementing new dashboard features and writing complex SQL queries to get the raw data ready for dashboarding usage. The preferred candidate should have an analytics mindset to convert raw data into user-friendly and dynamic dashboards, along with developing Paginated Reports. This is an individual contributor position that leads the team from the technical front. Responsibilities We Entrust You With Participate in peer reviews of Reports/Dashboards created by internal team members and ensure a high standard as per defined reporting/dashboarding standards. Product thinking, problem solving, strategic orientation Must have expertise in Apache Superset BI tools and SSRS. Excellent skills in SSRS, SSIS and expert in SQL scripts. Nice to have: sound knowledge of AWS QuickSight, PowerShell Excellent SQL scripting for complex queries Proficient in both verbal and non-verbal communication Knowledge of ETL concepts and tools e.g. Talend/SSIS Knowledge of query optimization in SQL and Redshift Nice to have: sound knowledge of Data Warehousing and Data Lake concepts Understands the requirements of a Dashboard/Report from management stakeholders and has the analytical view to design dynamic dashboards using any BI analytics tool Required Skills: TSQL, ANSI SQL, PSQL, SSIS, SSRS, Apache Superset, AWS Redshift, QuickSight Good to have skills: Data Lake concepts, analytical ability, business and merchant requirement understanding What Matters In This Role Apache Superset, AWS QuickSight, SSRS, SSIS for developing dashboards is preferred Excellent TSQL, ANSI SQL, data modeling and querying from multiple data stores is mandatory. Experience with Microsoft SSRS and SSIS is needed for developing paginated dashboards Experience: 4-12 years What We Value In Our People You take the shot: You Decide Fast and You Deliver Right You are the CEO of what you do: you show ownership and make things happen You own tomorrow: by building solutions for the merchants and doing the right thing You sign your work like an artist: You seek to learn and take pride in the work you do
Posted 1 week ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Job Summary Synechron seeks a meticulous and experienced Data Test Automation Engineer to enhance our data validation and automation efforts. The role is pivotal in verifying data quality, ensuring system integrity, and automating testing within complex data pipelines, including ETL and Big Data environments. The ideal candidate will bring expertise in Selenium automation frameworks and hands-on experience with large-scale data systems, contributing to the reliability and scalability of our data analytics initiatives. This role enables data-driven decision-making and supports our advanced analytics and digital transformation objectives by ensuring high standards of data accuracy and system performance. Software Requirements Required: Selenium WebDriver (Java/Python bindings) mandatory for automation scripting SQL and data querying tools (Oracle, SQL Server, etc.) for validation and verification ETL tools (preferably Informatica, Talend, or equivalent) Big Data frameworks (Hadoop, Spark, or similar) basic understanding needed Data validation and testing tools (e.g., data compare tools, custom scripts) Preferred: Scripting languages: Python and Java for automation and scripting tasks Data warehousing tools such as Snowflake, Redshift, or similar Automation frameworks (e.g., TestNG, JUnit) Cloud platforms (AWS, Azure) knowledge to support data processing environments Overall Responsibilities Design and execute comprehensive test plans for data validation within ETL workflows and Big Data architectures Develop, maintain, and execute automated test scripts using Selenium to verify data integrity across multiple systems Validate data accuracy, completeness, and consistency across sources, staging, and target systems Collaborate with data engineers and developers to understand data workflows, identify testing needs, and address quality issues Document defects, track resolution, and communicate findings clearly to technical and non-technical stakeholders Perform performance testing on data pipelines to evaluate scalability and efficiency under load Maintain detailed testing documentation, including test cases, scripts, defect logs, and reports Continuously improve testing processes by integrating best practices for automation, data governance, and compliance standards Technical Skills (By Category) Programming Languages: Required: Java, Python (both essential for automation scripting and data validation) Preferred: Shell scripting, Scala (useful for Big Data environments) Databases/Data Management: Required: SQL proficiency for data validation tasks (querying, data comparison) Preferred: Experience with data warehousing and Data Lake architectures (e.g., Hadoop, Snowflake) Cloud Technologies: Preferred: Familiarity with cloud platforms supporting data pipelines (AWS, Azure) Frameworks and Libraries: Required: Selenium WebDriver, TestNG/JUnit for automation Preferred: Pytest, pandas for data processing and validation scripts Development Tools and Methodologies: Required: Version control (Git), Agile/Scrum workflows Preferred: Continuous Integration tools (Jenkins) Security Protocols: Not specific but must understand data security and governance standards relevant to testing environments Experience Requirements Minimum 3-5 years of experience in data testing, especially within ETL and Big Data environments Proven expertise in testing data workflows, validating large datasets, and automating test cases using Selenium Solid understanding of data warehousing concepts and architecture (data lakes, 
marts, warehouses) Prior experience in scripting and automation using Java and Python Familiarity with cloud-based data platforms and tools is a plus Experience with data governance, compliance standards, and performance testing of data pipelines Day-to-Day Activities Analyze project requirements and develop detailed test plans for data workflows Build and execute automated test scripts for data validation using Selenium Collaborate with data engineers to understand ETL and data pipeline architectures Conduct functional, regression, and performance testing of data pipelines Log defects, perform root cause analysis, and coordinate with the team for resolution Review test results, prepare status reports, and suggest improvements Support continuous integration and deployment pipelines to streamline data testing processes Stay updated on emerging data testing tools and industry best practices Qualifications Educational Background: Bachelors or higher degree in Computer Science, Information Technology, or related disciplines; equivalent industry experience acceptable Certifications: Relevant certifications in Data Testing, Big Data, or automation frameworks (preferred) Training in data governance, security, and compliance standards is advantageous Commitment to continuous professional development in data technologies and testing methodologies Professional Competencies Strong analytical and critical thinking skills for complex data validation Effective communication skills for engaging with cross-functional teams and stakeholders Ability to prioritize tasks and manage time efficiently in a fast-paced environment Team collaboration skills and adaptability to evolving project requirements Continuous learning mindset focused on emerging data tools and best practices Problem-solving orientation with attention to detail to identify and resolve data inconsistencies
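To make the UI-versus-warehouse reconciliation described in this posting concrete, here is a minimal Selenium-plus-SQL sketch; the report URL, element locator and warehouse table are illustrative assumptions.

```python
# Sketch of an automated data check: pull a figure shown in a reporting UI with
# Selenium and reconcile it against the warehouse via SQL. Names are placeholders.
import psycopg2
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://reports.example.com/daily-summary")
    ui_total = int(driver.find_element(By.ID, "total-orders").text.replace(",", ""))
finally:
    driver.quit()

conn = psycopg2.connect(host="warehouse.example.com", dbname="dwh",
                        user="qa_user", password="***")
with conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM fact_orders WHERE order_date = CURRENT_DATE - 1")
    db_total = cur.fetchone()[0]

assert ui_total == db_total, f"UI shows {ui_total}, warehouse has {db_total}"
```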
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary We are seeking a highly skilled Sr. Data Engineer to lead the design, implementation, and optimization of scalable data architectures. The ideal candidate will have a deep understanding of data modeling, ETL processes, cloud data solutions, and big data technologies. You will work closely with cross-functional teams to build robust, high-performance data pipelines and infrastructure to enable data-driven decision-making. Experience: 4-7 years Work Location: Hyderabad (Hybrid) Mandatory skills: AWS, Python, SQL, Airflow, DBT Responsibilities Data Pipeline Development: Design and implement robust ETL/ELT processes to ensure efficient data ingestion, transformation, and storage. Big Data & Cloud Solutions: Architect data solutions using cloud platforms like AWS, Azure, or GCP, leveraging services such as Snowflake, Redshift, BigQuery, and Databricks. Database Optimization: Ensure performance tuning, indexing strategies, and query optimization for relational and NoSQL databases. Data Governance & Security: Implement best practices for data quality, metadata management, compliance (GDPR, CCPA), and security. Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to translate business requirements into scalable solutions. Technology Evaluation: Stay updated with emerging trends, assess new tools and frameworks, and drive innovation in data engineering. Required Skills Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Experience: 4-7 years of experience in data engineering Cloud Platforms: Strong expertise in AWS data services. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and related frameworks. Databases: Hands-on experience with SQL, NoSQL, and columnar databases such as PostgreSQL, MongoDB, Cassandra, and Snowflake. Programming: Proficiency in Python, Scala, or Java for data processing and automation. ETL Tools: Experience with tools like Apache Airflow, Talend, DBT, or Informatica. Machine Learning & AI Integration (Preferred): Understanding of how to architect data solutions for AI/ML applications. Skills: airflow,kafka,mongodb,cassandra,talend,snowflake,dbt,postgresql,java,python,data engineering,spark,sql,scala,hadoop,aws,informatica
Posted 1 week ago
5.0 years
20 - 25 Lacs
Bandra, Maharashtra, India
On-site
Job Title: Data Engineer – AWS, PySpark, Databricks Location: Mumbai (Bandra kurla complex ) Experience Required: 5+ Years Employment Type: Contract Job Overview We are seeking a highly skilled Data Engineer with strong hands-on experience in AWS cloud services , PySpark , and Databricks to join our advanced analytics and data platform team. The ideal candidate should have expertise in building scalable data pipelines, transforming large datasets, and managing cloud-based big data environments. You’ll work closely with data scientists, architects, and business teams to deliver efficient data solutions. Key Responsibilities Design, develop, and maintain robust ETL/ELT pipelines using PySpark on Databricks. Work with structured and semi-structured data on large-scale data platforms. Integrate data from various sources into the AWS data lake and Delta Lake. Optimize Spark jobs for performance, cost, and reliability. Collaborate with business stakeholders to understand data requirements and translate them into scalable solutions. Use AWS services (S3, Glue, Lambda, EMR, Redshift, Athena) to manage data pipelines and orchestrations. Write clean, maintainable, and reusable PySpark code with proper logging and error handling. Implement data governance, quality checks, and documentation practices. Perform data validation and reconciliation to ensure accuracy and consistency. Participate in code reviews, unit testing, and continuous integration/delivery processes. Required Skills & Qualifications 5+ years of experience in data engineering or big data development. Minimum 3+ years of experience working with PySpark and Apache Spark. Strong experience working on Databricks (Notebooks, Workflows, Delta Lake). Proficiency with AWS services: S3, Lambda, Glue, EMR, Redshift, and Athena. Expertise in working with large-scale data ingestion, transformation, and storage. Solid understanding of data partitioning, file formats (Parquet, ORC, JSON), and performance tuning in Spark. Experience in CI/CD practices, version control (Git), and deployment automation. Familiarity with Airflow, DBT, or other workflow orchestration tools is a plus. Strong communication and collaboration skills in agile environments. Good To Have Experience with Delta Lake architecture and advanced Databricks features. Knowledge of SQL optimization and data warehousing concepts. Experience integrating with BI tools like Power BI or Tableau. Exposure to machine learning pipelines or integration with MLflow. AWS certifications (e.g., AWS Certified Data Analytics, Solutions Architect). Skills: data governance,data quality,athena,aws,databricks,git,python,emr,lambda,dbt,etl,redshift,sql,airflow,ci/cd,data warehousing,pyspark,elt,glue,s3,spark
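Here is a minimal sketch of the incremental Databricks/Delta Lake loading pattern this role describes, assuming placeholder S3 paths and an order_id merge key (neither is from the posting).

```python
# Minimal sketch of an incremental load on Databricks: read a new S3 extract and
# upsert it into a Delta table. Paths and the key column are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

updates = spark.read.parquet("s3://example-raw-bucket/orders/2025-07-01/")

target = DeltaTable.forPath(spark, "s3://example-lake-bucket/delta/orders")
(target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```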
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description Come help Amazon create cutting-edge data and science-driven technologies for delivering packages to the doorstep of our customers! The Last Mile Routing & Planning organization builds the software, algorithms and tools that make the “magic” of home delivery happen: our flow, sort, dispatch and routing intelligence systems are responsible for the billions of daily decisions needed to plan and execute safe, efficient and frustration-free routes for drivers around the world. Our team supports deliveries (and pickups!) for Amazon Logistics, Prime Now, Amazon Flex, Amazon Fresh, Lockers, and other new initiatives. As part of the Last Mile Science & Technology organization, you’ll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will present your analyses, plans, and recommendations to senior leadership and connect new ideas to drive change. Analytical ingenuity and leadership, business acumen, effective communication capabilities, and the ability to work effectively with cross-functional teams in a fast paced environment are critical skills for this role. Responsibilities Create actionable business insights through analytical and statistical rigor to answer business questions, drive business decisions, and develop recommendations to improve operations Collaborate with Product Managers, software engineering, data science, and data engineering partners to design and develop analytic capabilities Define and govern key business metrics, build automated dashboards and analytic self-service capabilities, and engineer data-driven processes that drive business value Navigate ambiguity to develop analytic solutions and shape work for junior team members Basic Qualifications 2+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g. t-test, Chi-squared) Experience with scripting language (e.g., Python, Java, or R) Preferred Qualifications Master's degree, or Advanced technical degree Knowledge of data modeling and data pipeline design Experience with statistical analysis, co-relation analysis Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2994624
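As a small illustration of the statistical methods listed in the qualifications above (e.g. a t-test), here is a sketch comparing two hypothetical route-planning variants on synthetic data; the numbers are invented for demonstration only.

```python
# Two-sample Welch t-test on synthetic delivery-time data for a control and a
# treatment routing variant; purely illustrative numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
variant_a = rng.normal(loc=31.0, scale=5.0, size=500)   # minutes per route, control
variant_b = rng.normal(loc=29.5, scale=5.0, size=500)   # minutes per route, treatment

t_stat, p_value = stats.ttest_ind(variant_a, variant_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```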
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description Transportation Financial Systems (TFS) owns the technology components that perform the financial activities for the transportation business. These systems are used across all transportation programs and retail expansion to new geographies. TFS systems provide financial document creation & management, expense auditing, accounting, payments and cost allocation functions. Our new generation products are highly scalable and operate at finer-level granularity to reconcile every dollar in transportation financial accounts with zero manual entries or corrections. The goal is to develop a global product suite for all freight modes touching every single package movement across Amazon. Our mission is to abstract logistics complexities from the financial world and financial complexities from the logistics world. We are looking for an innovative, hands-on and customer-obsessed candidate for this role. The candidate must be detail-oriented, have superior verbal and written communication skills, and should be able to juggle multiple tasks at once. The candidate must be able to make sound judgments and get the right things done. We seek a Business Intelligence (BI) Engineer to strengthen our data-driven decision-making processes. This role requires an individual with excellent statistical and analytical abilities, deep knowledge of business intelligence solutions, the ability to make strong use of GenAI technologies to analyse and solve problems, and the ability to collaborate with product, business & tech teams. The successful candidate will demonstrate the ability to work independently and learn quickly, quickly comprehend Transportation Finance system functions, have a passion for data and analytics, be a self-starter comfortable with ambiguity, work in a fast-paced and entrepreneurial environment, and be driven by a desire to innovate Amazon’s approach to this space. Key job responsibilities Translate business problems into analytical requirements and define expected output Develop and implement key performance indicators (KPIs) to measure business performance and product impact. Responsible for deep-dive analysis on key metrics. Create and execute an analytical approach to solve the problem in line with stakeholder expectations Leverage GenAI technologies to solve problems and build solutions Be the domain expert and have knowledge of data availability from various sources. Execute solutions with scalable development practices in scripting; write & optimize SQL queries, reporting, data extraction and data visualization. Proactively and independently work with stakeholders to construct use cases and associated standardized outputs for your work Actively manage the timeline and deliverables of projects, focusing on interactions in the team About The Team Transportation Financial Systems (TFS) owns the technology components that perform the financial activities for the transportation business. These systems are used across all transportation programs and retail expansion to new geographies. TFS systems provide financial document creation & management, expense auditing, accounting, payments and cost allocation functions. Our new generation products are highly scalable and operate at finer-level granularity to reconcile every dollar in transportation financial accounts with zero manual entries or corrections. The goal is to develop a global product suite for all freight modes touching every single package movement across Amazon.
Our mission is to abstract logistics complexities from financial world and financial complexities from logistics world. Basic Qualifications 2+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g. t-test, Chi-squared) Experience with scripting language (e.g., Python, Java, or R) Preferred Qualifications Master's degree, or Advanced technical degree Knowledge of data modeling and data pipeline design Experience with statistical analysis, co-relation analysis Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A3003955
Posted 1 week ago
5.0 years
7 - 8 Lacs
Hyderābād
On-site
JOB DESCRIPTION: We are seeking a skilled Data Engineer with over 5+ years of experience to design, build, and maintain scalable data pipelines and perform advanced data analysis to support business intelligence and data-driven decision-making. The ideal candidate will have a strong foundation in computer science principles, extensive experience with SQL and big data tools, and proficiency in cloud platforms and data visualization tools. Responsibilities Key Responsibilities: Exp: 5+ years Design, develop, and maintain robust, scalable ETL pipelines using Apache Airflow, DBT, Composer (GCP), Control-M, Cron, Luigi, and similar tools. Build and optimize data architectures including data lakes and data warehouses. Integrate data from multiple sources ensuring data quality and consistency. Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions. Analyze complex datasets to identify trends, generate actionable insights, and support decision-making. Develop and maintain dashboards and reports using Tableau, Power BI, and Jupyter Notebooks for visualization and pipeline validation. Manage and optimize relational and NoSQL databases such as MySQL, PostgreSQL, Oracle, MongoDB, and DynamoDB. Work with big data tools and frameworks including Hadoop, Spark, Hive, Kafka, Informatica, Talend, SSIS, and Dataflow. Utilize cloud data services and warehouses like AWS Glue, GCP Dataflow, Azure Data Factory, Snowflake, Redshift, and BigQuery. Support CI/CD pipelines and DevOps workflows using Git, Docker, Terraform, and related tools. Ensure data governance, security, and compliance standards are met. Participate in Agile and DevOps processes to enhance data engineering workflows. Requirements Required Qualifications: 5+ years of professional experience in data engineering and data analysis roles. Strong proficiency in SQL and experience with database management systems such as MySQL, PostgreSQL, Oracle, and MongoDB. Hands-on experience with big data tools like Hadoop and Apache Spark. Proficient in Python programming. Experience with data visualization tools such as Tableau, Power BI, and Jupyter Notebooks. Proven ability to design, build, and maintain scalable ETL pipelines using tools like Apache Airflow, DBT, Composer (GCP), Control-M, Cron, and Luigi. Familiarity with data engineering tools including Hive, Kafka, Informatica, Talend, SSIS, and Dataflow. Experience working with cloud data warehouses and services (Snowflake, Redshift, BigQuery, AWS Glue, GCP Dataflow, Azure Data Factory). Understanding of data modeling concepts and data lake/data warehouse architectures. Experience supporting CI/CD practices with Git, Docker, Terraform, and DevOps workflows. Knowledge of both relational and NoSQL databases, including PostgreSQL, BigQuery, MongoDB, and DynamoDB. Exposure to Agile and DevOps methodologies. Experience with Amazon Web Services (S3, Glue, Redshift, Lambda, Athena) Nice to have Preferred Skills: Strong problem-solving and communication skills. Ability to work independently and collaboratively in a team environment. Experience with service development, REST APIs, and automation testing is a plus. Familiarity with version control systems and workflow automation. 
We offer Opportunity to work on bleeding-edge projects Work with a highly motivated and dedicated team Competitive salary Flexible schedule Benefits package - medical insurance, sports Corporate social events Professional development opportunities Well-equipped office About us Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
Posted 1 week ago
40.0 years
4 - 8 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-219223 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 30, 2025 CATEGORY: Information Systems BI Architect ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: We are seeking an experienced BI Architect with expertise in Databricks, Spotfire (Tableau and Power BI secondary), AWS, and enterprise business intelligence (BI) solutions to design and implement scalable, high-performance BI architectures. This role will focus on data modeling, visualization, governance, self-service BI enablement, and cloud-based BI solutions, ensuring efficient, data-driven decision-making across the organization. The ideal candidate will have strong expertise in BI strategy, data engineering, data warehousing, semantic layer modeling, dashboarding, and performance optimization, working closely with data engineers, business stakeholders, and leadership to drive BI adoption and enterprise analytics excellence. Preferred Candidate would have extensive Spotfire experience followed by Power BI or Tableau. Roles & Responsibilities: Design and develop enterprise BI architectures and implement the architectural vision for TIBCO Spotfire at the enterprise level hosted in AWS Partner with data engineers and architects to ensure optimal data modeling, caching, and query performance in Spotfire Design scalable, secure, and high-performance Spotfire environments, including multi-node server setups and hybrid cloud integrations. Develop reusable frameworks and templates for dashboards, data models, and automation processes. Optimize BI query performance, indexing, partitioning, caching, and report rendering to enhance dashboard responsiveness and data refresh speed. Implement real-time and batch data integration strategies, ensuring smooth data flow from APIs, ERP/CRM systems (SAP, Salesforce, Dynamics 365), cloud storage, and third-party data sources into BI solutions. Establish and enforce BI governance best practices, including data cataloging, metadata management, access control, data lineage tracking, and compliance standards. Troubleshoot interactive dashboards, paginated reports, and embedded analytics solutions that deliver actionable insights. Implement DataOps and CI/CD pipelines for BI, leveraging Deployment Pipelines, Git integration, and Infrastructure as Code (IaC) to enable version control and automation. Stay up to date with emerging BI technologies, cloud analytics trends, and AI/ML-powered BI solutions to drive innovation. Collaborate with business leaders, data analysts, and engineering teams to ensure BI adoption, self-service analytics enablement, and business-aligned KPIs. Provide mentorship and training to BI developers, analysts, and business teams, fostering a data-driven culture across the enterprise. 
Must-Have Skills: Experience in BI architecture, data analytics, AWS, and enterprise BI solution development. Strong expertise in Spotfire, including information links, Spotfire Analyst, Spotfire Server, and Spotfire Web Player. Hands-on experience with Databricks (Apache Spark, Delta Lake, SQL, PySpark) for data processing, transformation, and analytics. Experience in scripting and extensions using Python or R. Expertise in BI strategy, KPI standardization, and enterprise data modeling, including dimensional modeling, star schema, and data virtualization. Hands-on experience with cloud BI solutions and enterprise data warehouses, such as Azure Synapse, AWS Redshift, Snowflake, Google BigQuery, or SQL Server Analysis Services (SSAS). Experience with BI governance, access control, metadata management, data lineage, and regulatory compliance frameworks. Expertise in Agile BI development, Scaled Agile (SAFe), DevOps for BI, and CI/CD practices for BI deployments. Ability to collaborate with C-level executives, business units, and engineering teams to drive BI adoption and data-driven decision-making. Good-to-Have Skills: Experience with Tibco Spotfire Lead Discovery. Knowledge of AI-powered BI, natural language processing (NLP) in BI, and automated machine learning (AutoML) for analytics. Experience with multi-cloud BI architectures and federated query solutions using Power BI or Tableau. Understanding of GraphQL, REST APIs, and data mesh principles for enterprise data access in BI. Knowledge of AI/ML pipeline integration within enterprise data architectures. Education and Professional Certifications Doctorate Degree with 8-10+ years of experience in Computer Science, IT, or related field OR Master’s degree with 10-14+ years of experience in Computer Science, IT, or related field OR Bachelor’s degree with 12-17+ years of experience in Computer Science, IT, or related field. Tibco Spotfire Certifications. Power BI Certifications. Tableau Certifications. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
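To make the Databricks side of this role concrete, the following is a minimal, hypothetical PySpark/Delta Lake sketch of the kind of curation job a BI Architect might review before the data reaches a Spotfire dashboard. The table and column names (sales_raw, sales_curated, region, amount, order_ts, order_id) are illustrative assumptions, not details from the posting.

from pyspark.sql import SparkSession, functions as F

# Illustrative sketch only: curate a raw Delta table into a reporting-grain
# table that a BI layer (e.g. Spotfire information links) can query quickly.
spark = SparkSession.builder.appName("bi-curation-sketch").getOrCreate()

raw = spark.read.table("sales_raw")  # hypothetical raw Delta table

curated = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("region", "order_date")
       .agg(F.sum("amount").alias("total_sales"),
            F.countDistinct("order_id").alias("order_count"))
)

# A partitioned Delta output keeps dashboard refreshes and ad-hoc queries fast.
(curated.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("sales_curated"))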
Posted 1 week ago
5.0 years
10 Lacs
Hyderābād
On-site
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Software Engineering Job Details About Salesforce We’re Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place. Salesforce is looking for a Senior Software Engineer to join the Trailhead team. Trailhead is an online learning platform created by Salesforce with a big, bold mission to democratize education and skill up anyone for the future of work. The Trailhead team has immediate opportunities for talented software engineers who want to make a significant and measurable positive impact on users, the company’s bottom line, and the industry. Trailhead is where developers, admins, and business users get the skills they need for the jobs of the future. And thanks to gamification, they have a little fun along the way. This is a rare opportunity to build something that positively impacts millions of users, helping folks develop new skills and break into new careers. Feel free to explore our app, trailhead.salesforce.com, and maybe even snag a few badges (we'd recommend the Introduction to Agentforce module)! Bonus points if you download the Trailhead GO app from the App Store and earn the badge on mobile! The team focuses on understanding our Trailblazers’ career needs and optimizing their learning journey. We build solutions across product and marketing based on the full point of view of the Trailblazer to cultivate more credentialed, employable individuals in the Salesforce ecosystem. We multiply our efforts across the Trailhead marketing, engineering, content, and credentialing teams to align our strategies and change the culture to use data to make decisions. In this role, you will work on building data pipelines, optimizing, and delivering data for core Trailhead KPIs. You will also contribute to setting the vision for and delivering the future of Trailhead core analytical funnel metrics and user behavior tracking/experiments. You will work on high impact and high visibility projects that are used by Salesforce executives. You will be encouraged to leverage and implement the latest Salesforce products and technologies. In addition, you will often be challenged to solve for ad-hoc/unstructured problems in a highly fast-paced environment and to partner with key stakeholders across teams. Equality is a core value at Salesforce. We strive to create workplaces that reflect the communities we serve and where everyone feels empowered to bring their full, authentic selves to work. People of different backgrounds, experiences, abilities, and perspectives are warmly encouraged to apply. Responsibilities Build & maintain pipelines – Develop Airflow workflows to ingest data from S3, APIs, and Kafka into Snowflake, ensuring reliability and scalability (an illustrative sketch follows at the end of this posting). Define data contracts & governance – Align with source teams on schemas/SLAs and enforce data classification, masking, and privacy standards.
Model for analytics – Create well-structured fact/dimension tables and business measures that power self-service dashboards. Safeguard data quality & lineage – Automate tests, monitoring, and lineage tracking to surface issues early and expedite root-cause analysis. Enable collaboration & learning – Partner with analysts and data scientists, document data definitions, and share best practices across the team. About You Collaborative team player who is kind, friendly, and cares about doing the right thing Desire to keep learning and growing, both technically and otherwise, and keeping informed of new data engineering methods and techniques Ability to ask good questions and learn quickly Openness and courage to give and receive feedback Respect towards people from diverse backgrounds and commitment to upholding diversity, equity, and inclusion at work Some Qualifications We Look For B.S./M.S. in Computer Science or equivalent field, and 5+ years of relevant experience within big data engineering Excellent understanding of data structures and distributed data processing patterns Experience with many of the following: Implementing and operating big data technologies like Redshift, Hadoop, Spark, Presto, Hive, etc., especially in the evolving areas of security, compliance (GDPR/CCPA/Data Privacy), and data retention Cloud computing and data processing, preferably AWS, security, cluster sizing, and performance tuning ETL design and pipeline implementation in languages like Java or Scala, or scripting in Python Hands-on experience with Airflow, CI/CD pipelines via Jenkins or similar tools, GitHub Well-versed with Snowflake/Google BigQuery/Redshift. Version control systems (GitHub, Stash, etc.) and deployment tools Implementing and managing Python open-source data orchestration tools such as Airflow, Pandas, etc. Experience working with Web analytics platforms, metrics, and data sets (Google Analytics preferred) Plusses Salesforce experience/certification is a plus but not required Heroku app development experience is a plus but not required Data Cloud experience is a plus but not required Accommodations If you require assistance due to a disability applying for open positions, please submit a request via this Accommodations Request Form. Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit.
The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
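As a rough illustration of the Airflow-to-Snowflake pipeline work described in the responsibilities above, here is a minimal, hypothetical DAG sketch. The DAG id, schedule, stage, and table names are assumptions made for illustration; a production pipeline would use the team's Snowflake provider operators and connection IDs rather than a print placeholder.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def copy_s3_to_snowflake(**context):
    # Placeholder: in practice this step would run a COPY INTO statement via a
    # Snowflake hook/operator against an external stage pointing at S3.
    print("COPY INTO analytics.trail_events FROM @raw_s3_stage/events/")


with DAG(
    dag_id="trailhead_events_sketch",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="copy_s3_to_snowflake",
        python_callable=copy_s3_to_snowflake,
    )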
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Consultant - Data Engineer - Databricks! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements. You would be part of the data integrity/analysis team in the banking and financial domain. You will be responsible for independently building data analysis around complex business problems from data available in client-owned or accessible systems. For these tasks, you would be encouraged to understand the business ask/problem, assess the scope, quantity, and quality of the available data, and prepare and build the code using PySpark/Databricks and Python, loading data into the DWH and Data Mart for the downstream consumption team. Responsibilities Extensive hands-on experience with Python (PySpark) and PySpark with SQL. Experience should include working with RDDs, StructTypes, and more in PySpark. Exposure to working in Databricks notebooks for PySpark and PySpark-with-SQL coding. Good hands-on experience integrating with AWS services using Python. Experience with cloud technologies like AWS (S3, Redshift, SNS). Expertise in developing ETL and batch processes to support data movement. The candidate should have good communication skills and be self-driven. May work independently with their own deliverables and discussion points with the onshore customer. Qualifications we seek in you! Minimum Qualifications / Skills Degree (BE, B.Sc.) Preferred Qualifications The candidate must have good communication and client-handling skills. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 1, 2025, 4:16:06 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
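For illustration only, here is a small PySpark snippet of the kind of explicit-schema (StructType) and DataFrame work the posting mentions; the column names and S3 paths are hypothetical assumptions, not client details.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("banking-data-sketch").getOrCreate()

# Explicit schema (StructType) for a hypothetical transactions feed.
schema = StructType([
    StructField("account_id", StringType(), nullable=False),
    StructField("txn_type", StringType(), nullable=True),
    StructField("amount", DoubleType(), nullable=True),
])

# Read raw CSV from S3, drop rows with missing amounts, and write a curated
# layer for the downstream DWH / Data Mart consumption team.
txns = spark.read.csv("s3://example-bucket/raw/transactions/", schema=schema, header=True)
clean = txns.filter(txns.amount.isNotNull())
clean.write.mode("overwrite").parquet("s3://example-bucket/curated/transactions/")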
Posted 1 week ago
2.0 years
6 - 8 Lacs
Hyderābād
On-site
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. - Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g. t-test, Chi-squared) - Experience with scripting language (e.g., Python, Java, or R) The Global Real Estate and Facilities (GREF) Technology team, part of Finance Automation, is the software development and data and analytics team for GREF. Our mission is to build technology solutions that simplify the processes Amazon employs to manage its corporate real estate and improve workplace experience. We build services and products that support various GREF domains, including space and occupancy planning, design and construction, employee health and safety, sustainability, facilities maintenance, help desk, and building management systems requiring IoT expertise. If you possess the skills to build, operate, and scale the next generation of distributed systems supporting Amazon's growth, this role may be well-suited for you. We are seeking a passionate Business Intelligence Engineer (BIE) to create the next generation of real estate systems and tools. You will play a crucial role in driving the development and implementation of advanced business intelligence and analytics solutions. You will leverage your strong technical expertise and analytical skills to empower GREF's cross-functional teams with actionable data insights that support strategic decision-making. The ideal candidate is a self-starter, comfortable with ambiguity, and excels at building highly scalable solutions. They will enjoy learning and implementing new technologies and will be fearless in applying them in suitable situations. A Business Intelligence Engineer (BIE) at Amazon works on real-world problems on a global scale, owns their systems end to end, and influences the direction of our technology that impacts hundreds of millions of customers around the world. Coming to Amazon gives you the opportunity to work on development teams in one of our many rapidly growing organizations. Come join us in making history! Key job responsibilities • Design and develop robust, scalable, and high-performance BI architectures and data models to address GREF's business intelligence requirements • Build visually engaging, interactive dashboards and reports that provide valuable insights to GREF stakeholders using Amazon QuickSight • Collaborate closely with data engineers, data scientists, and business partners to define and deliver impactful BI solutions that drive value for the GREF organization • Dive deep into large data sets to answer specific business questions using Excel, SQL, and other data manipulation languages • Troubleshoot data, analyses, and code, research root causes, propose solutions, and take ownership of next steps for their resolutions • Continually evaluate and recommend new BI tools and technologies to enhance GREF's analytical capabilities Master's degree or advanced technical degree Knowledge of data modeling and data pipeline design Experience with statistical analysis, correlation analysis Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
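As a hedged illustration of the ad-hoc deep dives this BIE role describes, here is a small Python/SQL sketch; the space_occupancy table, its columns, and the connection handling are hypothetical assumptions and would depend on the actual Redshift schema.

import pandas as pd

# Hypothetical business question: average desk occupancy by site and month.
OCCUPANCY_BY_SITE = """
SELECT site_id,
       DATE_TRUNC('month', check_in_date) AS month,
       AVG(occupied_desks::float / NULLIF(total_desks, 0)) AS avg_occupancy
FROM space_occupancy
GROUP BY 1, 2
ORDER BY 1, 2;
"""

def run_occupancy_report(conn) -> pd.DataFrame:
    # conn is assumed to be a DB-API connection to Redshift (e.g. psycopg2);
    # the resulting frame could back a QuickSight dataset or an Excel pivot.
    return pd.read_sql(OCCUPANCY_BY_SITE, conn)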
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
AWS Databricks Data Engineer Work Location - Hyderabad Years of Experience - 5 to 10 years Work Mode - Work From Office Notice Period - Immediate to 15 Days Only Role Description This is a full-time on-site role for an AWS Data Engineer located in Hyderabad. The AWS Data Engineer will be responsible for designing and implementing data pipelines, developing and maintaining data models, performing data extraction, transformation, and loading (ETL) processes, managing data warehousing solutions, and analyzing data to support business objectives. Day-to-day tasks will include collaborating with cross-functional teams to gather requirements and ensure data architecture aligns with organizational needs. Mandatory Skills: AWS, Python, PySpark, SQL (Databricks). Good to have: Airflow. Qualifications Data Engineering and Data Modeling skills Experience with Extract Transform Load (ETL) processes Proficiency in Data Warehousing solutions Data Analytics skills Strong problem-solving skills and attention to detail Excellent communication and teamwork skills Experience with AWS data services such as Redshift, S3, and Glue is beneficial Bachelor’s degree in Computer Science, Information Technology, or related field
Posted 1 week ago
2.0 years
2 - 2 Lacs
India
On-site
PompeeVFX Animation Studios is looking for a skilled and creative 3D Generalist with 2+ years of professional experience to join our growing team. This role requires strong expertise in modeling, texturing, lighting, and rendering, along with a keen eye for detail and storytelling. Create high-quality 3D assets, including modeling, texturing, rigging, and animation, as per project requirements. Work on various aspects of 3D production, such as lighting, rendering, and compositing. Collaborate closely with the creative team to meet project deadlines and ensure visual consistency. Troubleshoot and solve technical issues related to 3D workflows. Stay updated on industry trends and incorporate best practices into the pipeline. Experience: Minimum 1 year of experience as a 3D Generalist. Skills: Proficiency in industry-standard tools like Blender, Maya, 3ds Max, or Houdini. Strong knowledge of texturing tools like Substance Painter or Photoshop. Familiarity with rendering engines like Arnold, Redshift, or Unreal Engine. Basic understanding of compositing in After Effects or Nuke. Ability to work in a fast-paced environment and adapt to project changes. Portfolio: A strong portfolio showcasing your 3D work is required. Experience of one to two years is required to apply for this post. Job Types: Full-time, Permanent Pay: ₹18,000.00 - ₹22,000.00 per month Benefits: Paid sick time Schedule: Day shift Supplemental Pay: Overtime pay Performance bonus Work Location: In person Application Deadline: 05/07/2025
Posted 1 week ago
5.0 years
0 Lacs
Haryana
On-site
CURRENT EMPLOYEES, CONSULTANTS, AND AGENCY PARTNERS: If you currently work for Brown-Forman, please apply by clicking the Careers icon on the Workday portal. For best results, use Google Chrome to view this page. The Senior Data Engineer will lead the design, development, and optimization of data architectures and pipelines. This role involves mentoring junior engineers and collaborating with cross-functional teams to deliver high-quality data solutions that drive business insights and informed decision-making. Lead the development and maintenance of scalable data architectures and pipelines. Design and implement data models and schemas to support business intelligence and analytics. Optimize data processing workflows for performance, scalability, and reliability. Mentor and guide junior data engineers, providing technical leadership and best practices. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions. Ensure data quality and integrity through rigorous testing and validation processes. Execute steady-state operating and monitoring procedures for our data warehouse and provide periodic 24x7 on-call support as necessary. What you bring to the table: Bachelor’s Degree in Computer Science, Information Technology, or a related field. 5+ years of experience in data engineering or a related field. Advanced proficiency in SQL and experience with relational and NoSQL databases. Expertise in big data technologies (e.g., Hadoop, Spark) and data pipeline tools (e.g., Apache Nifi, Airflow). Strong programming skills in languages such as Python, Java, or Scala. Experience with data warehousing solutions such as Redshift, BigQuery, Snowflake, or similar. Experience with SAP Business Warehouse (BW) and ABAP. Strong problem-solving abilities and attention to detail. Fluent in English with excellent communication skills and the ability to work effectively in a collaborative environment. What Makes You Unique Experience with cloud-based data solutions (e.g., AWS, Azure, GCP). Knowledge of ETL tools (e.g., Talend, Boomi, Informatica). Familiarity with data governance and data security best practices. Experience with data visualization tools (e.g., Tableau, Power BI, Looker). Requisition Type: Employee Management Level: Professional Global Job Level: P6 Number of Openings Available: 0
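To give a flavour of the "rigorous testing and validation" responsibility above, here is a minimal, hypothetical pytest-style data-quality check; the shipments dataset, its columns, and the S3 path are illustrative assumptions only.

import pandas as pd


def validate_shipments(df: pd.DataFrame) -> list:
    """Return a list of human-readable data-quality failures (empty = pass)."""
    failures = []
    if df["shipment_id"].duplicated().any():
        failures.append("duplicate shipment_id values found")
    if df["quantity"].lt(0).any():
        failures.append("negative quantity values found")
    if df["ship_date"].isna().any():
        failures.append("missing ship_date values found")
    return failures


def test_shipments_are_clean():
    # Hypothetical curated dataset; reading from S3 assumes s3fs is installed.
    df = pd.read_parquet("s3://example-bucket/curated/shipments/")
    assert validate_shipments(df) == []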
Posted 1 week ago