
2254 Redshift Jobs - Page 10

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Our client is a leading mobile marketing and audience platform that empowers the app ecosystem through cutting-edge solutions in mobile marketing, audience building, and monetization. With direct integration into over 500,000 monthly active mobile apps, the platform leverages global first-party data to unlock valuable insights, predict behaviors, and drive growth. We are looking for an experienced and innovative Senior Business Analyst to join their Operational Department.

Job Description:

Key Responsibilities:
• Cross-Functional Collaboration: Act as a key analytics partner for business, product, and R&D teams, aligning projects with strategic goals.
• Data Analysis & Insights: Design and execute analytics projects, including quantitative analysis, statistical modeling, automated monitoring tools, and advanced data insights.
• Business Opportunity Identification: Leverage our client's extensive first-party data to identify trends, predict behaviors, and uncover growth opportunities.
• Strategic Reporting: Create impactful dashboards, reports, and presentations to communicate insights and recommendations to stakeholders at all levels.
• Innovation: Drive the use of advanced analytics techniques, such as machine learning and predictive modeling, to enhance decision-making processes.

Requirements:
• Experience: 6+ years as a Data Analyst (or similar role) in media, marketing, or a related industry.
• Technical Skills: Proficiency in SQL and Excel, with experience working with large datasets and big data tools (e.g., Vertica, Redshift, Hadoop, Spark). Familiarity with BI and visualization tools (e.g., Tableau, MicroStrategy).
• Analytical Expertise: Strong problem-solving skills, statistical modeling knowledge, and familiarity with predictive analytics and machine learning algorithms.
• Strategic Thinking: Ability to align data insights with business objectives, demonstrating creativity and out-of-the-box thinking.
• Soft Skills: Proactive, independent, collaborative, and results-driven, with excellent communication skills in English.

Educational Background: BSc in Industrial Engineering, Computer Science, Mathematics, or a related field (MSc/MBA is an advantage).

*** Only candidates residing in Bangalore will be considered.
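For context on the toolchain this role names, analysis of this kind typically pairs SQL against a warehouse such as Redshift with lightweight trend work in Python. A minimal, hedged sketch; the cluster endpoint, credentials, schema, and column names are all hypothetical placeholders:

```python
import pandas as pd
import psycopg2

# Hypothetical Redshift connection; host and credentials are placeholders.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="analyst", password="...",
)

# Aggregate daily active users per app category (hypothetical schema).
query = """
    SELECT app_category,
           event_date,
           COUNT(DISTINCT user_id) AS dau
    FROM app_events
    GROUP BY app_category, event_date
    ORDER BY app_category, event_date;
"""
df = pd.read_sql(query, conn)

# Simple trend check: 7-day rolling mean of DAU per category.
df["dau_7d"] = (
    df.sort_values("event_date")
      .groupby("app_category")["dau"]
      .transform(lambda s: s.rolling(7, min_periods=1).mean())
)
print(df.tail())
```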

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Business Intelligence Analyst (Minimum Three Years of Total Experience)

We are seeking an experienced and highly skilled Business Intelligence Analyst who is passionate about transforming innovative ideas into effective, data-driven solutions. In this role, you will be responsible for designing and delivering business intelligence solutions, including ETL (Extract, Transform, Load) processes, data visualization, and data analysis. You will collaborate with cross-functional teams, including IT, Finance, and business stakeholders, to understand their requirements, design scalable and efficient solutions, and ensure the availability of insightful visualizations and reports. Your expertise will play a crucial role in driving data-driven decision-making and empowering the organization with actionable insights.

Success in this role will require creativity and the ability to work independently to analyze sales data and create business intelligence dashboards. This position entails complete end-to-end project ownership: conducting discovery meetings with stakeholders, leveraging existing SQL queries, building new SQL queries, creating ETL processes using AWS Glue and Lambda, working with AWS Redshift, developing automation workflows, and using data visualization tools such as AWS QuickSight and Power BI (a minimal sketch of a Glue-to-Redshift job follows this listing).

Key Responsibilities:
Visualization and Reporting: Design and develop interactive dashboards, reports, and visualizations that provide actionable insights to business stakeholders using industry-leading tools and technologies.
Data Analysis: Analyze data, identify insights, and collaborate with sales leaders to recommend business actions.
Solution Design: Work closely with business stakeholders to understand their requirements and translate them into comprehensive and scalable business intelligence solutions.
Data Asset Creation: Leverage internal data warehouses and external datasets to build new data assets for analysis and visualization.
ETL Development: Design and implement ETL processes to extract, transform, and load data, focusing on automation, accuracy, and reusability.
Collaboration and Stakeholder Management: Collaborate with cross-functional teams, including data analysts and sales leaders, to understand requirements, gather feedback, and ensure the successful delivery of solutions.
Documentation and Training: Create comprehensive documentation of BI solutions (including ETL processes and visualizations) and provide training and support to users and stakeholders on the effective use of business intelligence dashboards and analytics.

We recognize that skills and competencies can manifest in various ways and may stem from diverse life experiences. If you do not meet all the listed requirements, we still encourage you to apply for the position.

Qualifications: The ideal candidate will have:
Excellent problem-solving and analytical skills, with the ability to apply knowledge and creativity to resolve complex issues.
Strong thought leadership and a quick understanding of how data and insights can be transformed into valuable features.
Experience with:
Data Visualization Tools: AWS QuickSight (experience with Power BI or similar tools is also acceptable).
ETL Tools: AWS Glue and AWS Lambda.
Databases: SQL programming (experience with PostgreSQL and AWS Redshift).
Exceptional project management skills, with the ability to organize and prioritize multiple tasks effectively.
Strong interpersonal skills and the ability to collaborate with partners from various business units and levels within the organization.
3-5 years of experience in business intelligence, analytics, or related roles.
Minimum 2 years of hands-on experience with AWS Glue, Lambda, RDS, Redshift, and S3.
A BS or MS degree in Engineering, Data & Analytics, Information Systems, or a related field (a master's degree is a plus).
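As referenced above, ETL on the stack this listing names (Glue, Lambda, Redshift) is commonly written as an AWS Glue job script in Python/PySpark. A minimal sketch, assuming a Glue Data Catalog database, a pre-configured Redshift connection, and an S3 staging bucket, all with hypothetical names:

```python
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw sales data registered in the Glue Data Catalog (hypothetical names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
)

# Light transform: keep completed orders only.
completed = orders.filter(f=lambda row: row["status"] == "COMPLETED")

# Load into Redshift through a pre-configured Glue connection (hypothetical).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=completed,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "analytics.orders", "database": "dev"},
    redshift_tmp_dir="s3://example-bucket/tmp/",
)
job.commit()
```

In practice a Lambda function or a schedule would trigger this job, and QuickSight or Power BI would read the resulting Redshift table.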

Posted 5 days ago

Apply

5.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Business Intelligence Analyst (Minimum Five Years of Total Experience)

We are seeking an experienced and highly skilled Business Intelligence Analyst who is passionate about transforming innovative ideas into effective, data-driven solutions. In this role, you will be responsible for designing and delivering business intelligence solutions, including ETL (Extract, Transform, Load) processes, data visualization, and data analysis. You will collaborate with cross-functional teams, including IT, Finance, and business stakeholders, to understand their requirements, design scalable and efficient solutions, and ensure the availability of insightful visualizations and reports. Your expertise will play a crucial role in driving data-driven decision-making and empowering the organization with actionable insights.

Success in this role will require creativity and the ability to work independently to analyze sales data and create business intelligence dashboards. This position entails complete end-to-end project ownership: conducting discovery meetings with stakeholders, leveraging existing SQL queries, building new SQL queries, creating ETL processes using AWS Glue and Lambda, working with AWS Redshift, developing automation workflows, and using data visualization tools such as AWS QuickSight and Power BI.

Key Responsibilities:
Visualization and Reporting: Design and develop interactive dashboards, reports, and visualizations that provide actionable insights to business stakeholders using industry-leading tools and technologies.
Data Analysis: Analyze data, identify insights, and collaborate with sales leaders to recommend business actions.
Solution Design: Work closely with business stakeholders to understand their requirements and translate them into comprehensive and scalable business intelligence solutions.
Data Asset Creation: Leverage internal data warehouses and external datasets to build new data assets for analysis and visualization.
ETL Development: Design and implement ETL processes to extract, transform, and load data, focusing on automation, accuracy, and reusability.
Collaboration and Stakeholder Management: Collaborate with cross-functional teams, including data analysts and sales leaders, to understand requirements, gather feedback, and ensure the successful delivery of solutions.
Documentation and Training: Create comprehensive documentation of BI solutions (including ETL processes and visualizations) and provide training and support to users and stakeholders on the effective use of business intelligence dashboards and analytics.

We recognize that skills and competencies can manifest in various ways and may stem from diverse life experiences. If you do not meet all the listed requirements, we still encourage you to apply for the position.

Qualifications: The ideal candidate will have:
Excellent problem-solving and analytical skills, with the ability to apply knowledge and creativity to resolve complex issues.
Strong thought leadership and a quick understanding of how data and insights can be transformed into valuable features.
Experience with:
Data Visualization Tools: AWS QuickSight (experience with Power BI or similar tools is also acceptable).
ETL Tools: AWS Glue and AWS Lambda.
Databases: SQL programming (experience with PostgreSQL and AWS Redshift).
Exceptional project management skills, with the ability to organize and prioritize multiple tasks effectively.
Strong interpersonal skills and the ability to collaborate with partners from various business units and levels within the organization.
5-8 years of experience in business intelligence, analytics, or related roles.
A BS or MS degree in Engineering, Data & Analytics, Information Systems, or a related field (a master's degree is a plus).

Posted 5 days ago

Apply

5.0 - 7.0 years

9 - 12 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Source: Naukri

Hiring Data Engineers with 3+ years of experience in Databricks, PySpark, Delta Lake, and AWS (S3, Glue, Redshift, Lambda, EMR). Must have strong SQL/Python, CI/CD, and data pipeline experience. Only candidates from Tier-1 company backgrounds will be considered.
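As an illustration of the listed stack, here is a minimal PySpark sketch that lands raw files from S3 into a Delta Lake table. Bucket paths and column names are hypothetical, and the cluster is assumed to have the Delta Lake libraries available (as on Databricks):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-delta-demo").getOrCreate()

# Read raw CSV files landed in S3 (bucket and path are hypothetical).
raw = (spark.read.option("header", True)
       .csv("s3://example-bucket/landing/orders/"))

# Basic cleanup before publishing to the curated lakehouse layer.
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("order_date", F.to_date("order_ts")))

# Write as a Delta table, partitioned by date.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("s3://example-bucket/delta/orders/"))
```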

Posted 5 days ago

Apply

10.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Source: LinkedIn

Role: Lead Data Engineer
Location: Indore
Experience required: 10+ years

Job Description:
Build and maintain data pipelines for ingesting and processing structured and unstructured data.
Ensure data accuracy and quality through validation checks and sanity reports.
Improve data infrastructure by automating manual processes and scaling systems.
Support internal teams (Product, Delivery, Onboarding) with data issues and solutions.
Analyze data trends and provide insights to inform key business decisions.
Collaborate with program managers to resolve data issues and maintain clear documentation.

Must-Have Skills:
Proficiency in SQL, Python (Pandas, NumPy), and R
Experience with ETL tools (e.g., Apache NiFi, Talend, AWS Glue)
Cloud experience with AWS (S3, Redshift, EMR, Athena, RDS)
Strong understanding of data modeling, warehousing, and data validation
Familiarity with data visualization tools (Tableau, Power BI, Looker)
Experience with Apache Airflow, Kubernetes, Terraform, Docker
Knowledge of data lake architectures, APIs, and custom data formats (JSON, XML, YAML)
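As an illustration of the validation checks and sanity reports this listing mentions, here is a minimal pandas sketch; the input file and key column are hypothetical:

```python
import pandas as pd

def sanity_report(df: pd.DataFrame, key: str) -> dict:
    """Basic validation summary of the kind used in pipeline sanity checks."""
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_counts": df.isna().sum().to_dict(),
    }

df = pd.read_json("events.json", lines=True)  # hypothetical input file
report = sanity_report(df, key="event_id")
assert report["duplicate_keys"] == 0, f"duplicate keys found: {report}"
print(report)
```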

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Description

Roles and responsibilities:
Design AWS architectures based on business requirements.
Create architectural diagrams and documentation.
Present cloud solutions to stakeholders.

Experience: 5-10 years

Skills and Qualifications:
Design, develop, and maintain scalable ETL/ELT pipelines using AWS services like Glue, Lambda, and Step Functions.
Work with batch and real-time data processing using AWS Glue, Kinesis, Kafka, or Apache Spark.
Optimize data pipelines for performance, scalability, and cost-effectiveness.
Identify bottlenecks and optimize query performance on Redshift, Athena, and Glue.
Strong knowledge of AWS services: EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, CloudWatch, etc.
Experience with serverless architectures (AWS Lambda, API Gateway, Step Functions).
Experience with AWS networking (VPC, Route 53, ELB, Security Groups, etc.).
Experience with AWS CloudFormation for automating infrastructure.
Proficiency in scripting languages such as Python or Bash.
Experience with automation tools (AWS Systems Manager, AWS Lambda).
Experience with containerization (Docker, Kubernetes, AWS ECS, EKS, Fargate).
Experience with AWS CloudWatch, AWS X-Ray, the ELK Stack, or third-party monitoring tools.
Experience with AWS database services (RDS, DynamoDB, Aurora, Redshift).
Experience with storage solutions (S3, EBS, EFS, Glacier).
Experience with AWS Direct Connect, Transit Gateway, and VPN solutions.
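For context on how Glue, Lambda, and Step Functions typically compose on this stack, here is a minimal, hedged sketch of a Lambda handler that launches a Glue job via boto3; the job name and argument are hypothetical:

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Lambda handler that kicks off a Glue ETL job (job name is hypothetical)."""
    run = glue.start_job_run(
        JobName="nightly-orders-etl",
        Arguments={"--ingest_date": event.get("ingest_date", "")},
    )
    return {"JobRunId": run["JobRunId"]}
```

A Step Functions state machine would typically invoke this handler (or call Glue directly via a service integration) as one state in a larger workflow.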

Posted 5 days ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Source: LinkedIn

Join our Team

About this opportunity: We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.

What you will do:
Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads.
Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring.
Design and implement cloud-based data lake and data warehouse architectures.
Ensure high data quality, governance, security, and observability across data platforms.
Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka.
Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.

The skills you bring:
Languages: Python, SQL, Bash
Data Tools: Apache Spark, Airflow, Kafka, dbt, Pandas
Cloud Platforms: AWS (preferred), Azure, or GCP
Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
Data Modeling: OLAP/OLTP, Star/Snowflake schema, Data Vault
DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Our careers site explains what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Kolkata
Req ID: 768921
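For context, the batch workflows this role describes are typically expressed as Airflow DAGs. A minimal sketch, assuming Airflow 2.x; the DAG id and task bodies are placeholders:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # placeholder: pull data from a source system

def load():
    ...  # placeholder: write to the warehouse

with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # extract runs before load
```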

Posted 5 days ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Data Accessibility Engineering Support: Manager, Data Operations & Management

As the Manager of Data Accessibility Engineering Support, you will play a critical role in ensuring that enterprise data is secure, discoverable, and accessible for advanced analytics, AI/ML, and operational use. You will oversee the implementation and support of data governance tooling, metadata management, and access controls across cloud-native platforms. This role is both hands-on and strategic, ensuring compliance with organizational policies while enabling scalable data accessibility across GCP, AWS, BigQuery, Redshift, and other modern data environments.

Who we are looking for:

Primary Responsibilities:
Data Accessibility & Governance Enablement: Lead the implementation and support of data accessibility solutions, ensuring efficient access to governed and trusted data assets. Oversee data governance tools and platforms (e.g., Collibra) for metadata management, lineage, and policy enforcement. Manage and maintain technical metadata and data cataloging frameworks that support enterprise discoverability.
Cloud Platform Integration: Design and implement data accessibility frameworks for GCP and AWS environments, with a strong focus on BigQuery, Redshift, and cloud-native storage layers (GCS/S3). Collaborate with cloud engineering and security teams to enforce fine-grained access controls and data classification.
AI/ML Support & Lifecycle Management: Partner with AI/ML teams to support model lifecycle management through reliable access to training and scoring datasets. Ensure data quality and accessibility standards are embedded in MLOps workflows and pipelines.
Data Quality, Policy & Compliance: Implement and monitor enterprise data quality frameworks to support regulatory compliance and business confidence. Develop strategies for reconciliation, validation, and data forensics to resolve data inconsistencies. Ensure alignment with organizational data usage policies, privacy standards, and auditability requirements.
Cross-Functional Collaboration & Support: Work closely with data stewards, data engineers, data scientists, and compliance teams to continuously improve data operations. Provide Tier 2/3 support for data accessibility and metadata-related issues. Lead efforts to educate teams on data usage best practices, standards, and governance workflows.

Skills:
6 to 10 years of experience in data operations, data governance, or data quality engineering roles.
Hands-on experience with:
Data governance platforms, especially Collibra
Cloud platforms: Google Cloud Platform (GCP), Amazon Web Services (AWS)
Data warehouses: BigQuery, Redshift (and/or Snowflake)
Metadata management, cataloging, and data lineage tracking
AI/ML data workflows and supporting structured/unstructured data access for model training and inferencing
Strong analytical and problem-solving skills in large-scale, distributed data environments.
Familiarity with data security, privacy regulations, and compliance standards (e.g., GDPR, CCPA).
Excellent collaboration and communication skills across technical and non-technical teams.
Bachelor’s or Master’s degree in Data Science, Information Systems, Computer Science, or a related field.

Preferred Experience:
Experience in Retail or QSR environments with complex multi-region data access needs.
Exposure to enterprise data catalogs, automated data quality tooling, and access request workflows.
Current GCP Associate (or Professional) certification.

Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.

Additional Information: McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.

Posted 5 days ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Data Quality - Supervisor

As the Data Quality Lead, you will define and drive the enterprise-wide strategy for data quality management, ensuring that data is accurate, consistent, and trustworthy across the analytics and AI/ML ecosystem. This role combines strategic leadership with technical execution, overseeing quality frameworks, tooling, and cross-functional processes that ensure data integrity across cloud-native platforms such as GCP, AWS, BigQuery, and Redshift. You will collaborate closely with data governance, engineering, and business teams to embed data quality into the core of operational and analytical systems.

Who we are looking for:

Primary Responsibilities:
Data Quality Strategy & Leadership: Define and lead the implementation of enterprise data quality frameworks, standards, and operating models. Establish and track data quality KPIs, thresholds, and remediation SLAs across domains and business functions. Champion a data quality mindset across the organization by partnering with data owners, stewards, and executives.
Cloud Platform & Data Ecosystem Integration: Design and oversee quality frameworks across GCP (BigQuery, GCS), AWS (Redshift, S3), and hybrid environments. Integrate data quality validation and monitoring into data ingestion, transformation, and AI/ML pipelines. Oversee automation and observability for data quality using tools like Airflow, Cloud Composer, and CI/CD pipelines.
Tooling & Automation: Lead the evaluation, implementation, and expansion of data quality tools (e.g., Dataplex, Lightup, Collibra, or custom frameworks). Integrate data quality metrics and rules with Collibra or similar governance platforms for lineage, stewardship, and policy enforcement. Ensure quality rules are version-controlled, modular, and adaptable for new data domains.
Operations & Incident Management: Oversee enterprise processes for data anomaly detection, triage, root cause analysis, and resolution. Coordinate quality audits, incident reviews, and continuous improvement programs with data engineering and analytics teams. Ensure compliance with internal standards and external regulations (e.g., GDPR, CCPA, HIPAA).
Team Leadership & Collaboration: Lead a team of data quality engineers and analysts to implement and scale quality controls. Collaborate with data governance, data engineering, AI/ML, and domain owners to embed quality into product and platform development. Present strategic insights, trends, and risks to senior leadership and data councils.

Skills:
4-7 years of experience in data quality, data governance, or data operations, with at least 3 years in a leadership role.
Proven experience with:
Data quality tooling and validation frameworks
Cloud-native environments: GCP (BigQuery, GCS) and AWS (Redshift, S3)
Scripting in SQL and Python for custom checks, validation jobs, and automation
Data quality integration with governance platforms such as Collibra
Strong analytical and problem-solving skills, particularly in diagnosing and preventing data integrity issues.
Bachelor's or Master’s degree in Computer Science, Data Science, Information Systems, or a related field.

Preferred Experience:
Experience in Retail or Quick Service Restaurant (QSR) environments where trusted data is critical for real-time decisions.
Familiarity with AI/ML pipelines and how upstream data quality affects model performance.
Understanding of data mesh, domain-based ownership, and federated data quality models.
Current GCP Associate (or Professional) certification.

Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.

Additional Information: McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.

Posted 6 days ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Data Accessibility - Supervisor, Data Operations & Management

As the Data Accessibility Lead, you will drive the enterprise-wide strategy for enabling secure, governed, and scalable access to data for AI/ML, analytics, and business operations. You will lead cross-functional teams responsible for managing the data lifecycle, enforcing data quality standards, and implementing modern governance tooling such as Collibra. This role is pivotal to operationalizing data accessibility across cloud platforms like GCP and AWS, including BigQuery, Redshift, and other core data infrastructure.

Who we are looking for:

Primary Responsibilities:
Strategic Data Accessibility Leadership: Set the strategic direction for enterprise data accessibility, ensuring consistent and secure access across teams and platforms. Lead the implementation and adoption of data governance tools (e.g., Collibra) to manage metadata, lineage, and data policies. Champion enterprise adoption of semantic and technical metadata practices for improved discoverability and data use.
AI/ML Enablement: Oversee the availability, quality, and governance of data used for AI/ML model development and lifecycle management. Ensure that model training, validation, and deployment pipelines have reliable and timely access to governed datasets. Partner with MLOps, engineering, and product teams to embed data accessibility standards in model workflows.
Cloud Platform Integration: Oversee data accessibility initiatives in GCP and AWS, including integration with BigQuery, Redshift, and cloud-native storage. Develop strategies for managing access controls, encryption, and auditability of data assets across cloud environments.
Data Governance & Quality Oversight: Define and enforce enterprise data quality standards, including data profiling, validation, and exception handling workflows. Ensure compliance with internal data policies and external regulations (e.g., GDPR, HIPAA, CCPA). Lead enterprise initiatives around data lifecycle management, from ingestion and processing to archival and retention.
Cross-Functional Collaboration & Leadership: Lead and mentor a team of data operations professionals and collaborate with data engineering, governance, AI/ML, and compliance teams. Provide executive-level insights and recommendations for improving enterprise data accessibility, quality, and governance practices. Drive alignment between business units, technical teams, and compliance functions through effective data stewardship.

Skills:
4 to 7 years of experience in data operations, data governance, or data quality management, with at least 3 years in a strategic leadership capacity.
Strong hands-on and strategic experience with:
Collibra or similar data governance platforms
Cloud platforms: Google Cloud Platform (GCP), Amazon Web Services (AWS)
Enterprise data warehouses such as BigQuery, Redshift, or Snowflake
AI/ML model lifecycle support and MLOps integration
Data quality frameworks, metadata management, and data access policy enforcement
SQL
Strong analytical and problem-solving skills; ability to work across highly matrixed, global organizations.
Exceptional communication, leadership, and stakeholder management skills.
Bachelor’s or Master’s degree in Data Science, Information Systems, or a related field.

Preferred Experience:
Experience in Retail or Quick Service Restaurant (QSR) environments with operational and real-time analytics needs.
Familiarity with data mesh concepts, data product ownership, and domain-based accessibility strategies.
Experience navigating privacy, residency, or regulatory compliance in global data environments.
Current GCP Associate (or Professional) certification.

Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.

Additional Information: McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Data Quality Engineering Support: Manager, Data Operations & Management

As the Data Quality Engineering Support Manager, you will be responsible for implementing, scaling, and supporting enterprise-wide data quality frameworks across cloud-native data platforms. You will drive hands-on initiatives to monitor, validate, and reconcile data across ingestion, processing, and consumption layers, enabling trusted data for analytics, AI/ML, and operational workflows. This role requires a blend of technical depth, platform expertise (e.g., BigQuery, Redshift, GCP, AWS), and collaboration across engineering, governance, and product teams.

Who we are looking for:

Primary Responsibilities:
Data Quality Engineering & Monitoring: Implement and support automated data quality checks for accuracy, completeness, timeliness, and consistency across datasets. Develop validation frameworks for ingestion pipelines, curated layers, and reporting models in platforms like BigQuery and Redshift. Integrate data quality controls into CI/CD pipelines and orchestration tools (e.g., Airflow, Cloud Composer).
Data Quality Operations: Respond to and resolve data quality incidents and discrepancies across data domains and systems. Collaborate with engineering and product teams to implement root cause analysis and build long-term remediation strategies. Establish SLAs and alerting thresholds for data quality KPIs.
Cloud Platform Integration: Deploy scalable data quality monitoring solutions across GCP (BigQuery, Cloud Storage) and AWS (Redshift, S3, Glue). Leverage platform-native services and third-party tools for automated profiling, rule enforcement, and anomaly detection.
Governance Alignment: Partner with data governance teams to align quality rules with business glossary terms, reference data, and stewardship models. Integrate quality metadata into governance tools such as Collibra, enabling lineage and audit tracking.
Documentation, Enablement & Collaboration: Maintain playbooks, documentation, and automated reporting for quality audits and exception handling. Collaborate with data owners, analysts, and data product teams to promote a culture of data trust and shared ownership. Provide training and knowledge-sharing to enable self-service quality monitoring and issue triaging.

Skills:
5+ years of experience in data quality engineering, data operations, or data pipeline support, ideally in a cloud-first environment.
Hands-on expertise in:
Building and managing data quality checks (e.g., null checks, duplicates, data drift, schema mismatches); a minimal sketch follows this listing.
SQL and Python for quality validation and automation
Working with cloud-native data stacks: BigQuery, Redshift, GCS, S3
Data quality monitoring tools or frameworks (e.g., Dataplex, Lightup, Collibra)
Strong troubleshooting skills across distributed data systems and pipelines.
Bachelor’s degree in Computer Science, Information Systems, or a related field.

Preferred Experience:
Experience in Retail or Quick Service Restaurant (QSR) environments with high-volume data ingestion and reporting requirements.
Familiarity with data governance platforms (e.g., Collibra) and integration of quality rules into metadata models.
Exposure to AI/ML data pipelines and the impact of data quality on model performance.
Current GCP Associate (or Professional) certification.

Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.

Additional Information: McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
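As referenced in the listing above, here is a minimal, hedged sketch of the kinds of automated checks it names (nulls, duplicates, schema mismatches), written in pandas; the expected schema and column names are hypothetical:

```python
import pandas as pd

# Hypothetical expected schema for a curated table.
EXPECTED_SCHEMA = {"order_id": "int64", "amount": "float64", "status": "object"}

def quality_checks(df: pd.DataFrame) -> list[str]:
    """Null, duplicate, and schema-mismatch checks of the kind described above."""
    issues = []
    if df["order_id"].isna().any():
        issues.append("null order_id values")
    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values")
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            issues.append(f"schema mismatch on {col}: {df[col].dtype}")
    return issues
```

In production, checks like these would run inside the orchestration layer (e.g., as an Airflow task) and feed the alerting thresholds the listing describes.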

Posted 6 days ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Data Accessibility, Data Governance Tooling & Lifecycle Management Engineering Support

As the Data Accessibility Lead, you will drive the enterprise-wide strategy for enabling secure, governed, and scalable access to data for AI/ML, analytics, and business operations. You will lead cross-functional teams responsible for managing the data lifecycle, enforcing data quality standards, and implementing modern governance tooling such as Collibra. This role is pivotal to operationalizing data accessibility across cloud platforms like GCP and AWS, including BigQuery, Redshift, and other core data infrastructure.

Who we are looking for:

Primary Responsibilities:
Strategic Data Accessibility Leadership: Set the strategic direction for enterprise data accessibility, ensuring consistent and secure access across teams and platforms. Lead the implementation and adoption of data governance tools (e.g., Collibra) to manage metadata, lineage, and data policies. Champion enterprise adoption of semantic and technical metadata practices for improved discoverability and data use.
AI/ML Enablement: Oversee the availability, quality, and governance of data used for AI/ML model development and lifecycle management. Ensure that model training, validation, and deployment pipelines have reliable and timely access to governed datasets. Partner with MLOps, engineering, and product teams to embed data accessibility standards in model workflows.
Cloud Platform Integration: Oversee data accessibility initiatives in GCP and AWS, including integration with BigQuery, Redshift, and cloud-native storage. Develop strategies for managing access controls, encryption, and auditability of data assets across cloud environments.
Data Governance & Quality Oversight: Define and enforce enterprise data quality standards, including data profiling, validation, and exception handling workflows. Ensure compliance with internal data policies and external regulations (e.g., GDPR, HIPAA, CCPA). Lead enterprise initiatives around data lifecycle management, from ingestion and processing to archival and retention.
Cross-Functional Collaboration & Leadership: Lead and mentor a team of data operations professionals and collaborate with data engineering, governance, AI/ML, and compliance teams. Provide executive-level insights and recommendations for improving enterprise data accessibility, quality, and governance practices. Drive alignment between business units, technical teams, and compliance functions through effective data stewardship.

Skills:
4 to 7 years of experience in data operations, data governance, or data quality management, with at least 3 years in a strategic leadership capacity.
Strong hands-on and strategic experience with:
Collibra or similar data governance platforms
Cloud platforms: Google Cloud Platform (GCP), Amazon Web Services (AWS)
Enterprise data warehouses such as BigQuery, Redshift, or Snowflake
AI/ML model lifecycle support and MLOps integration
Data quality frameworks, metadata management, and data access policy enforcement
SQL
Strong analytical and problem-solving skills; ability to work across highly matrixed, global organizations.
Exceptional communication, leadership, and stakeholder management skills.
Bachelor’s or Master’s degree in Data Science, Information Systems, or a related field.

Preferred Experience:
Experience in Retail or Quick Service Restaurant (QSR) environments with operational and real-time analytics needs.
Familiarity with data mesh concepts, data product ownership, and domain-based accessibility strategies.
Experience navigating privacy, residency, or regulatory compliance in global data environments.
Current GCP Associate (or Professional) certification.

Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.

Additional Information: McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.

Posted 6 days ago

Apply

15.0 years

3 - 10 Lacs

Hyderābād

On-site

Source: Glassdoor

Job Information
Date Opened: 06/23/2025
Job Type: Full time
Industry: IT Services
City: Hyderabad
State/Province: Telangana
Country: India
Zip/Postal Code: 500081

About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH and Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description: We are seeking an experienced Data Solution Architect to lead the design and implementation of scalable, secure, and high-performing data solutions across cloud and hybrid environments. The ideal candidate will bring deep expertise in data engineering, APIs, Python, Spark/PySpark, and enterprise cloud platforms such as AWS and Azure. This is a strategic, client-facing role that involves working closely with stakeholders, engineering teams, and business leaders to architect and deliver robust data platforms.

Key Responsibilities:
Architect end-to-end data solutions across cloud (AWS/Azure) and on-premises environments.
Develop and integrate RESTful APIs for data ingestion, transformation, and distribution.
Define data architecture standards, best practices, and governance frameworks.
Work with DevOps and cloud teams to deploy solutions using CI/CD and infrastructure-as-code.
Guide and mentor data engineering teams in solution implementation and performance optimization.
Ensure high availability, scalability, and data security compliance across platforms.
Collaborate with product owners and stakeholders to translate business needs into technical specifications.
Conduct architecture reviews, risk assessments, and solution validation.

Required Skills & Experience:
15 to 22 years of total experience in IT, with at least 5 years in data architecture roles.
Strong experience with data processing frameworks and building ETL solutions.
Proven expertise in designing and deploying solutions on the AWS and Azure cloud platforms.
Hands-on experience with data integration, real-time streaming, and API-based data access.
Proficiency in data modeling (structured, semi-structured, and unstructured data).
Deep understanding of data lakes, data warehouses, and modern data mesh/architecture patterns.
Experience with tools such as Airflow, Glue, Databricks, Synapse, Redshift, or similar.
Knowledge of security, compliance, and governance practices in large-scale data platforms.
Strong communication, leadership, and client-facing skills.

Benefits: Standard company benefits.

Posted 6 days ago

Apply

0 years

0 Lacs

India

On-site

Source: Glassdoor

Location: IN - Hyderabad, Telangana
Goodyear Talent Acquisition Representative: Maria Monica Canding
Sponsorship Available: No
Relocation Assistance Available: No

Job Responsibilities:
You are responsible for designing and building data products, legal data layers, data streams, algorithms, and reporting systems (e.g., dashboards, front ends).
You ensure the correct design of solutions, performance, and scalability while considering appropriate cost control.
You link data product design with DevOps and infrastructure.
You act as a reference within and outside the Analytics team.
You serve as a technical partner to Data Engineers regarding digital product implementation.

Qualifications:
You have a Bachelor’s degree in Computer Science, Engineering, Management Information Systems, or a related discipline, or 10 or more years of experience in Information Technology in lieu of a degree.
You have 5 or more years of experience in Information Technology.
You have an in-depth understanding of database structure principles.
You have experience gathering and analyzing system requirements.
You have knowledge of data mining and segmentation techniques.
You have expertise in SQL and Oracle.
You are familiar with data visualization tools (e.g., Tableau, Cognos, SAP Analytics Cloud).
You possess proven analytical skills and a problem-solving attitude.
You have a proven ability to work with distributed systems.
You are able to develop creative solutions to problems.
You have knowledge and strong skills with SQL and NoSQL databases and applications, such as Teradata, Redshift, MongoDB, or equivalent.

Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law.

Goodyear is one of the world’s largest tire companies. It employs about 74,000 people and manufactures its products in 57 facilities in 23 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate

Posted 6 days ago

Apply

0 years

10 - 35 Lacs

Hyderābād

On-site

Source: Glassdoor

Years of IT experience with deep expertise in the S3, Redshift, Aurora, Glue, and Lambda services. At least one instance of proven experience developing a data platform end to end using AWS. Hands-on programming experience with DataFrames and Python, including unit testing of Python and Glue code. Experience with orchestration mechanisms such as Airflow and Step Functions. Experience working with AWS Redshift is mandatory: you must have experience writing stored procedures, an understanding of the Redshift Data API, and experience writing federated queries (a minimal sketch of the Data API follows this listing). Experience in Redshift performance tuning. Strong communication and problem-solving skills, including very good stakeholder communication and management.

Job Types: Full-time, Permanent
Pay: ₹1,068,726.69 - ₹3,529,919.70 per year
Benefits: Health insurance, Internet reimbursement, Paid sick time, Provident Fund
Location Type: In-person
Schedule: Day shift, Fixed shift, Monday to Friday
Work Location: In person
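As referenced above, here is a minimal sketch of calling Redshift through the Data API with boto3; the cluster identifier, database, secret ARN, and stored procedure name are hypothetical placeholders:

```python
import boto3

client = boto3.client("redshift-data")

# Run a statement against a Redshift cluster without managing JDBC connections.
resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:redshift-creds",
    Sql="CALL analytics.refresh_daily_sales();",  # invokes a stored procedure
)

# The API is asynchronous: poll describe_statement until Status is FINISHED,
# then fetch results (for SELECT-style statements) with get_statement_result.
desc = client.describe_statement(Id=resp["Id"])
print(desc["Status"])
```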

Posted 6 days ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Sr. AWS Data Engineer

Responsibilities:
Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark (a minimal sketch follows this listing).
Design and implement a customizable data processing framework using Python/PySpark, capable of handling diverse scenarios and evolving data processing requirements.
Implement data pipelines for data ingestion, transformation, and extraction leveraging AWS cloud services.
Seamlessly integrate a variety of AWS services, including S3, Glue, Kafka, Lambda, SQL, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL), and AWS Crawler, to construct a highly scalable and reliable data ingestion and extraction pipeline.
Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios.
Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data.
Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.

Qualifications

Must Have:
Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
4+ years of experience working with both relational and non-relational/NoSQL databases.
Strong SQL experience, demonstrating the ability to write complex queries from scratch.
Strong working experience in Redshift, along with other SQL database experience.
Strong scripting experience, with the ability to build intricate data pipelines using AWS serverless architecture.
Complete understanding of building an end-to-end data pipeline.

Nice to Have:
Strong understanding of Kinesis, Kafka, and CDK.
A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling.
Experience with Node.js and CDK.
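As referenced above, here is a minimal PySpark sketch of a metadata-driven ingestion loop, where a small metadata table drives what gets read and written. Source names, formats, and paths are hypothetical, and a real framework would read this metadata from a store such as RDS or DynamoDB rather than an in-code list:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

# Table-level metadata drives the pipeline (hypothetical entries).
SOURCES = [
    {"name": "orders", "format": "json", "path": "s3://example-bucket/raw/orders/"},
    {"name": "customers", "format": "csv", "path": "s3://example-bucket/raw/customers/"},
]

for src in SOURCES:
    reader = spark.read.format(src["format"])
    if src["format"] == "csv":
        reader = reader.option("header", True)
    df = reader.load(src["path"])
    # Write each source to a curated zone keyed by its metadata entry;
    # new sources are onboarded by adding metadata, not code.
    df.write.mode("overwrite").parquet(f"s3://example-bucket/curated/{src['name']}/")
```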

Posted 6 days ago

Apply

2.0 - 7.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Source: Naukri

Experience : 2+ years Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Hybrid (Bengaluru) Must have skills required: AWS, Go Lang, Python Requirements : We are looking for a Backend Engineer to help us through the next level of technology changes needed to revolutionize Healthcare for India. We are seeking individuals who can understand real-world scenarios and come up with scalable tech solutions for millions of patients to make healthcare accessible. The role comes with a good set of challenges to solve, and offers an opportunity to build new systems that will be rolled out at scale. You have 2 to 4 years or more of software development experience with expertise in designing and implementing high-performance web applications. Very strong understanding and experience with any of Java, Scala, GoLang, Python. Experience writing optimized queries in relational databases like Mysql, redshift / Postgres. You have exposure to basic data engineering concepts like data pipeline, hadoop or spark Write clean and testable code. You love to build platforms that enable other teams to build on top of. Some of challenges we solve include: Clinical decision support Early Detection: Digitally assist doctors in identifying high-risk patients for early intervention Track & Advice: Analyze patients vitals/test values across visits to assist doctors in personalizing chronic care. Risk Prevention: Assist doctors in monitoring the progression of chronic disease by drawing attention to additional symptoms and side effects. EMR (Electronic Medical Records): Clinical software to write prescriptions and manage clinical records AI-powered features Adapts to doctors practice: Learns from doctors prescribing preferences and provides relevant auto-fill recommendations for faster prescriptions. Longitudinal patient journey: AI analyses the longitudinal journey of patients to assist doctors in early detection. Medical language processing: AI-driven automatic digitization of printed prescriptions and test reports. Core platform Pharma advertising platform to doctors at the moment of truth Real world evidence to generate market insights for B2B consumption Virtual Store Online Pharmacy+ Diagnostic solutions helping patients with one-click order Technologies we use : Distributed Tech: Kafka, Elastic search Databases: MongoDB, RDS Cloud platform: AWS Languages: Go-lang, python, PHP UI Tech: React, react native Caching: Redis Big Data: AWS Athena, Redshift APM: NewRelic Responsibilities : Develop well testable and reusable services with structured, granular and well-commented code. Contribute in the area of API building, data pipeline setup, and new tech initiatives needed for a core platform Acclimate to new technologies and situations as per the company demands and requirements with the vision of providing best customer experience Meet expected deliverables and quality standards with every release Collaborate with teams to design, develop, test and refine deliverables that meet the objectives Perform code reviews and implement improvement plans Additional Responsibilities : Pitch-in during the phases of design and architectural solutions of Business problems. Organize, lead and motivate the development team to meet expected timelines and quality standards across releases. Actively contribute to development process improvement plans. Assist peers by code reviews and juniors through mentoring. 
Must have Skills:
Sound understanding of computer science fundamentals, including data structures and space and time complexity.
Excellent problem-solving skills.
Solid understanding of any of the modern object-oriented programming languages (like Java, Ruby, or Python) and/or functional languages (like Scala or GoLang).
Understanding of MPP (massively parallel processing) and frameworks like Spark.
Experience working with databases (RDBMS: MySQL, Redshift, etc.; NoSQL: Couchbase, MongoDB, Cassandra, etc.).
Experience working with open-source libraries and frameworks.
Strong hold on versioning tools such as Git/Bitbucket.

Good to have Skills:
Knowledge of microservices architecture.
Experience working with Kafka.
Experience with or exposure to ORM frameworks (like ActiveRecord or SQLAlchemy).
Working knowledge of full-text search (like Elasticsearch or Solr).

Skills: AWS, Go Lang, Python
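
As a hedged illustration of the "optimized queries in relational databases" requirement, here is a keyset (seek) pagination sketch against Postgres; the visits table, its columns, and the connection string are hypothetical:

```python
# Illustrative keyset (seek) pagination for a hypothetical patient-visits
# table in Postgres. Keyset pagination stays fast on large tables where
# OFFSET-based paging degrades linearly with page depth.
import psycopg2

conn = psycopg2.connect("dbname=emr user=app")  # placeholder DSN

def fetch_visits_after(last_visit_id: int, page_size: int = 50):
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT visit_id, patient_id, visited_at
            FROM visits
            WHERE visit_id > %s          -- seek past the previous page
            ORDER BY visit_id
            LIMIT %s
            """,
            (last_visit_id, page_size),
        )
        return cur.fetchall()
```

With an index on visit_id, each page is an index range scan regardless of how deep the client has paged, which is why this pattern is preferred for high-traffic list endpoints.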

Posted 6 days ago

Apply

0 years

6 - 8 Lacs

Calcutta

On-site

GlassDoor logo

Kolkata, West Bengal, India
Job ID: 768921

Join our Team

About this opportunity:
We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.

What you will do:
Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads.
Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring.
Design and implement cloud-based data lake and data warehouse architectures.
Ensure high data quality, governance, security, and observability across data platforms.
Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka (a minimal Airflow sketch follows this posting).
Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.

The skills you bring:
Languages: Python, SQL, Bash
Data tools: Apache Spark, Airflow, Kafka, dbt, Pandas
Cloud platforms: AWS (preferred), Azure, or GCP
Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)
Data modeling: OLAP/OLTP, star/snowflake schema, Data Vault

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
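
A minimal Airflow DAG sketch of the batch-workflow skill set named above; the DAG id, schedule, and callable are placeholders, and the schedule parameter assumes Airflow 2.4 or later:

```python
# Minimal sketch of a daily batch ETL DAG; all names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder: pull from a source system and land it in the lake.
    print("extracting and loading...")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```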

Posted 6 days ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Jaipur

On-site

GlassDoor logo

Job Summary
Auriga IT is seeking a proactive, problem-solving Data Analyst with 3–5 years of experience owning end-to-end data pipelines. You'll partner with stakeholders across engineering, product, marketing, and finance to turn raw data into actionable insights that drive business decisions. You must be fluent in the core libraries, tools, and cloud services listed below.

Your Responsibilities:
Pipeline Management: Design, build, and maintain ETL/ELT workflows using orchestration frameworks (e.g., Airflow, dbt).
Exploratory Data Analysis & Visualization: Perform EDA and statistical analysis using Python or R (a small pandas sketch follows this posting). Prototype and deliver interactive charts and dashboards.
BI & Reporting: Develop dashboards and scheduled reports to surface KPIs and trends. Configure real-time alerts for data anomalies or thresholds.
Insights Delivery & Storytelling: Translate complex analyses (A/B tests, forecasting, cohort analysis) into clear recommendations. Present findings to both technical and non-technical audiences.
Collaboration & Governance: Work cross-functionally to define data requirements, ensure quality, and maintain governance. Mentor junior team members on best practices in code, version control, and documentation.

Key Skills (you must know at least one technology from each category below):
Data Manipulation & Analysis: Python (pandas, NumPy) or R (tidyverse: dplyr, tidyr)
Visualization & Dashboarding: Python (matplotlib, seaborn, Plotly) or R (ggplot2, Shiny)
BI Platforms: commercial or open-source (e.g., Tableau, Power BI, Apache Superset, Grafana)
ETL/ELT Orchestration: Apache Airflow, dbt, or equivalent
Cloud Data Services: AWS (Redshift, Athena, QuickSight), GCP (BigQuery, Data Studio), or Azure (Synapse, Data Explorer)
Databases & Querying: strong SQL skills on an RDBMS (PostgreSQL, MySQL, Snowflake); decent knowledge of NoSQL databases

Additionally:
Bachelor's or Master's in a quantitative field (Statistics, CS, Economics, etc.).
3–5 years in a data analyst (or similar) role with end-to-end pipeline ownership.
Strong problem-solving mindset and excellent communication skills.
Certification in Power BI or Tableau is a plus.

Desired Skills & Attributes:
Familiarity with version control (Git) and CI/CD for analytics code.
Exposure to basic machine-learning workflows (scikit-learn, caret).
Comfortable working in Agile/Scrum environments and collaborating across domains.

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind; the inception of Auriga was based solely on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in!
https://www.aurigait.com/
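
A small pandas EDA sketch of the kind this role describes; the CSV file and column names are hypothetical:

```python
# Quick exploratory pass over a hypothetical orders dataset.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Basic profile: shape, dtypes, null counts, and distribution stats.
print(df.shape)
print(df.dtypes)
print(df.isna().sum())
print(df["amount"].describe())

# Daily revenue trend: a typical first chart for a KPI dashboard.
daily = df.groupby(df["order_date"].dt.date)["amount"].sum()
daily.plot(kind="line", title="Daily revenue")
plt.tight_layout()
plt.show()
```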

Posted 6 days ago

Apply

9.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Must have 9+ years of experience in the skills mentioned below.
Must Have: Big Data concepts; Python (core Python, able to write code); SQL; shell scripting; AWS S3 (see the boto3 sketch below).
Good to Have: Event-driven architecture/AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora.
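
For the AWS S3 item above, a minimal boto3 listing sketch; the bucket and prefix are placeholders:

```python
# List objects under a prefix, paginating so listings over
# 1,000 keys are handled correctly.
import boto3

s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-data-bucket", Prefix="landing/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```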

Posted 6 days ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Job Title: Senior Data Engineer

Job Summary
We are looking for an experienced Senior Data Engineer with 5+ years of hands-on experience in cloud data engineering platforms, specifically AWS, Databricks, and Azure. The ideal candidate will play a critical role in designing, building, and maintaining scalable data pipelines and infrastructure to support our analytics and business intelligence initiatives.

Key Responsibilities
Design, develop, and optimize scalable data pipelines using AWS services (e.g., S3, Glue, Redshift, Lambda).
Build and maintain ETL/ELT workflows leveraging Databricks and Apache Spark for processing large datasets (a minimal sketch follows this posting).
Work extensively with Azure data services such as Azure Data Lake, Azure Synapse, Azure Data Factory, and Azure Databricks.
Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver high-quality data solutions.
Ensure data quality, reliability, and security across multiple cloud platforms.
Monitor and troubleshoot data pipelines, implement performance tuning, and optimize resource usage.
Implement best practices for data governance, metadata management, and documentation.
Stay current with emerging cloud data technologies and industry trends to recommend improvements.

Required Qualifications
5+ years of experience in data engineering with strong expertise in AWS, Databricks, and Azure cloud platforms.
Hands-on experience with big data processing frameworks, particularly Apache Spark.
Proficient in building complex ETL/ELT pipelines and managing data workflows.
Strong programming skills in Python, Scala, or Java.
Experience working with structured and unstructured data in cloud storage solutions.
Knowledge of SQL and experience with relational and NoSQL databases.
Familiarity with CI/CD pipelines and DevOps practices in cloud environments.
Strong analytical and problem-solving skills with an ability to work independently and in teams.

Preferred Skills
Experience with containerization and orchestration tools (Docker, Kubernetes).
Familiarity with machine learning pipelines and tools.
Knowledge of data modeling, data warehousing, and analytics architecture.
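
A minimal sketch of the Databricks/Spark ETL workflow named above: read raw JSON, clean it, and write a partitioned Delta table. Paths and column names are illustrative, and spark is assumed to be the session a Databricks notebook provides:

```python
# Sketch of a Databricks ETL step; `spark` is the notebook-provided session.
from pyspark.sql import functions as F

raw = spark.read.json("s3://raw-zone/events/")

cleaned = (raw
    .withColumn("event_ts", F.to_timestamp("event_time"))
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("event_ts").isNotNull())      # drop unparseable records
    .dropDuplicates(["event_id"]))              # idempotent re-runs

(cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("s3://curated-zone/events/"))
```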

Posted 6 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title: Python Databricks Developer
Location: Pune
Experience: 6 to 10 Years
Employment Type: Full-Time

Job Summary:
We are looking for a skilled and experienced Python Databricks Developer who is proficient in developing scalable data pipelines, data transformation logic, and cloud-native analytics solutions using Python, Databricks, and AWS services. The ideal candidate should have strong data engineering experience and be comfortable working in fast-paced, Agile environments.

Key Responsibilities:
Design and develop scalable ETL pipelines and data workflows using Databricks (PySpark) and Python.
Work on large-scale data ingestion, processing, and transformation from various sources.
Leverage AWS services (e.g., S3, Glue, Lambda, Redshift, EMR) for data storage, orchestration, and compute.
Optimize performance of Spark jobs and Databricks notebooks for large-scale data operations (one common optimization is sketched after this posting).
Collaborate with data architects, analysts, and business stakeholders to deliver robust data solutions.
Implement best practices for data quality, data governance, and security.
Participate in code reviews, testing, and deployment of data solutions in DevOps-driven environments.
Create and maintain technical documentation, data dictionaries, and process flows.

Required Skills & Experience:
Strong hands-on programming experience in Python.
Minimum 4+ years of experience working with Databricks, especially with PySpark and Delta Lake.
Experience in building and managing data pipelines and ETL processes in cloud environments, particularly AWS.
Solid understanding of distributed computing concepts and Spark performance optimization.
Hands-on experience with AWS services such as S3, Glue, Lambda, Redshift, Athena, CloudWatch, etc.
Experience with version control (e.g., Git), CI/CD tools, and workflow orchestration tools like Airflow or Databricks Jobs.
Knowledge of data modeling, data warehousing, and data lake architectures.
Strong problem-solving skills and the ability to work independently or in a team.

Preferred Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Certification in AWS or Databricks is a plus.
Experience working in Agile environments with Jira or similar tools.
Familiarity with SQL and NoSQL databases is a plus.
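
One common Spark optimization from this posting's scope, sketched under assumptions: a broadcast hint on a small dimension table to avoid a shuffle join. Table paths and the join key are placeholders, and spark is assumed to be the Databricks-provided session:

```python
# Broadcast-join sketch: ship the small dimension table to every executor,
# turning a shuffle join into a map-side join.
from pyspark.sql import functions as F

facts = spark.read.format("delta").load("/mnt/lake/fact_sales")
dims = spark.read.format("delta").load("/mnt/lake/dim_product")  # small table

joined = facts.join(F.broadcast(dims), on="product_id", how="left")
joined.write.format("delta").mode("overwrite").save("/mnt/lake/sales_enriched")
```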

Posted 6 days ago

Apply

5.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Responsibilities / Qualifications
Candidates must have 5–6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred.
Ability to understand the existing system architecture and work towards the target architecture.
Experience with data profiling activities: discovering data quality challenges and documenting them (a small profiling sketch follows this posting).
Experience with the development and implementation of a large-scale data lake and data analytics platform on the AWS Cloud platform.
Develop and unit-test data pipeline architecture for data ingestion processes using AWS native services.
Experience with development on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Glue Data Catalog, Lake Formation, Apache Airflow, Lambda, etc.
Experience with the development of a data governance framework, including the management of data, the operating model, and data policies and standards.
Experience with orchestration of workflows in an enterprise environment.
Working experience with Agile methodology.
Experience working with source code management tools such as AWS CodeCommit or GitHub.
Experience working with Jenkins or any CI/CD pipelines using AWS services.
Experience working in an onshore/offshore model and collaborating on deliverables.
Good communication skills to interact with the onshore team.
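
A small data-profiling pass of the kind described above, assuming a Spark session and a placeholder S3 path: per-column null counts plus distinct counts, which are usually the first data-quality signals to document:

```python
# Profiling sketch; `spark` is an existing SparkSession and the path
# is a placeholder for a raw dataset.
from pyspark.sql import functions as F

df = spark.read.parquet("s3://landing-zone/customers/")

# One row of per-column null counts.
null_profile = df.select([
    F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}__nulls")
    for c in df.columns
])
null_profile.show(truncate=False)

# Distinct counts per column (cardinality check).
for c in df.columns:
    print(c, "distinct:", df.select(c).distinct().count())
```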

Posted 6 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Title: Data Engineer (Dataiku)
Location: Bangalore, Chennai, Noida, Hyderabad
Experience: 7–10 Years

Job Summary
We are seeking an experienced Data Engineer with strong expertise in Dataiku, AWS cloud technologies, and Large Language Models (LLMs). The ideal candidate will play a key role in building scalable data pipelines, deploying ML/LLM-based solutions, and driving analytics initiatives across the organization using modern data engineering practices.

Required Skills & Experience
5+ years of experience in data engineering or related roles.
Hands-on experience with Dataiku DSS (Data Science Studio) for building data pipelines and analytical workflows (a recipe sketch follows this posting).
Good understanding of LLMs, prompt engineering, and generative AI use cases.
Strong command of AWS services: S3, Lambda, Glue, Redshift, SageMaker, etc.
Proficient in Python, SQL, and data transformation techniques.
Experience with REST APIs, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.
Familiarity with data modeling, ETL best practices, and cloud security standards.
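
A sketch of a Dataiku Python recipe body using the standard dataiku package; the dataset names are placeholders for whatever exists in the Flow, and the added column is purely illustrative:

```python
# Minimal Dataiku Python recipe: read an input dataset, transform it,
# and write the output dataset with its schema.
import dataiku
import pandas as pd

input_ds = dataiku.Dataset("raw_events")        # placeholder Flow dataset
df = input_ds.get_dataframe()

# Illustrative transformation step.
df["processed_at"] = pd.Timestamp.now(tz="UTC")

output_ds = dataiku.Dataset("events_prepared")  # placeholder Flow dataset
output_ds.write_with_schema(df)
```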

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Role: Data Engineer (GCP, Azure, or AWS)
Experience: 5+ years
Location: Bangalore | Gurugram | Pune
Notice: Immediate Joiners
Mode: Hybrid

JD Skills (must have):
Strong programming skills in languages such as Python, PySpark, SQL, etc.
Experience building and optimizing ETL workflows using tools/technologies such as Spark, Snowflake, Airflow, and/or Azure Data Factory, Glue, Redshift, etc.
Craft and optimize complex SQL queries and stored procedures for data transformation, aggregation, and analysis.
Develop and maintain data models, ensuring scalability and optimal performance.
Utilize Snowpark for data processing within the Snowflake platform (a minimal sketch follows this posting).
Integrate Snowflake for efficient data storage and retrieval.
Exposure to API integrations to facilitate data workflows.
Experience implementing CI/CD pipelines through DevOps platforms.
Good experience with cloud infrastructure such as Azure, AWS, or GCP.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Skills (good to have):
Experience with Docker, Kubernetes, etc.
Exposure to HTML, CSS, JavaScript/jQuery, Node.js, and Angular/React.
Experience in API development; Flask/Django is a bonus.
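
A minimal Snowpark sketch for the Snowflake items above; the connection parameters, table, and column names are all placeholders:

```python
# Snowpark sketch: aggregate a table inside Snowflake and persist the result.
from snowflake.snowpark import Session
from snowflake.snowpark import functions as F

session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

orders = session.table("ORDERS")  # placeholder table

# Daily totals computed push-down in Snowflake, not on the client.
daily = (orders
    .group_by(F.col("ORDER_DATE"))
    .agg(F.sum("AMOUNT").alias("TOTAL_AMOUNT")))

daily.write.save_as_table("DAILY_ORDER_TOTALS", mode="overwrite")
```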

Posted 6 days ago

Apply