
2213 Redshift Jobs

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Delhi

On-site

The Role Context: This is an exciting opportunity to join a dynamic and growing organization working at the forefront of technology trends and developments in the social impact sector. The Wadhwani Center for Government Digital Transformation (WGDT) works with government ministries and state departments in India with a mission of "Enabling digital transformation to enhance the impact of government policy, initiatives and programs". We are seeking a highly motivated and detail-oriented individual to join our team as a Data Engineer with experience in designing, constructing, and maintaining the architecture and infrastructure necessary for data generation, storage, and processing, and in contributing to the successful implementation of digital government policies and programs. You will play a key role in developing robust, scalable, and efficient systems to manage large volumes of data, making it accessible for analysis and decision-making, and driving innovation and optimizing operations across various government ministries and state departments in India.

Key Responsibilities:
a. Data Architecture Design: Design, develop, and maintain scalable data pipelines and infrastructure for ingesting, processing, storing, and analyzing large volumes of data efficiently. This involves understanding business requirements and translating them into technical solutions.
b. Data Integration: Integrate data from various sources such as databases, APIs, streaming platforms, and third-party systems. Ensure the data is collected reliably and efficiently, maintaining data quality and integrity throughout the process as per the ministries'/government data standards.
c. Data Modeling: Design and implement data models to organize and structure data for efficient storage and retrieval, using techniques such as dimensional modeling, normalization, and denormalization depending on the specific requirements of the project.
d. Data Pipeline Development / ETL (Extract, Transform, Load): Develop data pipeline/ETL processes to extract data from source systems, transform it into the desired format, and load it into the target data systems. This involves writing scripts, using ETL tools, or building data pipelines to automate the process and ensure data accuracy and consistency.
e. Data Quality and Governance: Implement data quality checks and data governance policies to ensure data accuracy, consistency, and compliance with regulations. Design and track data lineage, data stewardship, metadata management, business glossaries, etc.
f. Data Lakes and Warehousing: Design and maintain data lakes and data warehouses to store and manage structured data from relational databases, semi-structured data like JSON or XML, and unstructured data such as text documents, images, and videos at any scale. Integrate with big data processing frameworks such as Apache Hadoop, Apache Spark, and Apache Flink, as well as with machine learning and data visualization tools.
g. Data Security: Implement security practices, technologies, and policies designed to protect data from unauthorized access, alteration, or destruction throughout its lifecycle. This includes data access controls, encryption, data masking and anonymization, data loss prevention, and compliance with regulatory requirements such as DPDP, GDPR, etc.
h. Database Management: Administer and optimize databases, both relational and NoSQL, to manage large volumes of data effectively.
i. Data Migration: Plan and execute data migration projects to transfer data between systems while ensuring data consistency and minimal downtime.
j. Performance Optimization: Optimize data pipelines and queries for performance and scalability. Identify and resolve bottlenecks, tune database configurations, and implement caching and indexing strategies to improve data processing speed and efficiency.
k. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with access to the necessary data resources. Work closely with IT operations teams to deploy and maintain data infrastructure in production environments.
l. Documentation and Reporting: Document data models, data pipelines/ETL processes, and system configurations. Create documentation and provide training to other team members to ensure the sustainability and maintainability of data systems.
m. Continuous Learning: Stay updated with the latest technologies and trends in data engineering and related fields. Participate in training programs, attend conferences, and engage with the data engineering community to enhance skills and knowledge.

Desired Skills/Competencies:
Education: A Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or equivalent, with at least 5 years of experience.
Database Management: Strong expertise in working with databases, such as SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Big Data Technologies: Familiarity with big data technologies, such as Apache Hadoop, Spark, and related ecosystem components, for processing and analyzing large-scale datasets.
ETL Tools: Experience with ETL tools (e.g., Apache NiFi, Apache Airflow, Talend Open Studio, Pentaho, InfoSphere) for designing and orchestrating data workflows.
Data Modeling and Warehousing: Knowledge of data modeling techniques and experience with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
Data Governance and Security: Understanding of data governance principles and best practices for ensuring data quality and security.
Cloud Computing: Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services for scalable and cost-effective data storage and processing.
Streaming Data Processing: Familiarity with real-time data processing frameworks (e.g., Apache Kafka, Apache Flink) for handling streaming data.

KPIs:
Data Pipeline Efficiency: Measure the efficiency of data pipelines in terms of data processing time, throughput, and resource utilization. KPIs could include average time to process data, data ingestion rates, and pipeline latency.
Data Quality Metrics: Track data quality metrics such as completeness, accuracy, consistency, and timeliness of data. KPIs could include data error rates, missing values, data duplication rates, and data validation failures.
System Uptime and Availability: Monitor the uptime and availability of data infrastructure, including databases, data warehouses, and data processing systems. KPIs could include system uptime percentage, mean time between failures (MTBF), and mean time to repair (MTTR).
Data Storage Efficiency: Measure the efficiency of data storage systems in terms of storage utilization, data compression rates, and data retention policies. KPIs could include storage utilization rates, data compression ratios, and data storage costs per unit.
Data Security and Compliance: Track adherence to data security policies and regulatory compliance requirements such as DPDP, GDPR, HIPAA, or PCI DSS. KPIs could include security incident rates, data access permissions, and compliance audit findings.
Data Processing Performance: Monitor the performance of data processing tasks such as ETL (Extract, Transform, Load) processes, data transformations, and data aggregations. KPIs could include data processing time, CPU usage, and memory consumption.
Scalability and Performance Tuning: Measure the scalability and performance of data systems under varying workloads and data volumes. KPIs could include scalability benchmarks, system response times under load, and performance improvements achieved through tuning.
Resource Utilization and Cost Optimization: Track resource utilization and costs associated with data infrastructure, including compute resources, storage, and network bandwidth. KPIs could include cost per data unit processed, cost per query, and cost savings achieved through optimization.
Incident Response and Resolution: Monitor the response time and resolution time for data-related incidents and issues. KPIs could include incident response time, time to diagnose and resolve issues, and customer satisfaction ratings for support services.
Documentation and Knowledge Sharing: Measure the quality and completeness of documentation for data infrastructure, data pipelines, and data processes. KPIs could include documentation coverage, documentation update frequency, and knowledge-sharing activities such as internal training sessions or knowledge base contributions.

Years of experience of the current role holder: New position
Ideal years of experience: 3 - 5 years
Career progression for this role: CTO, WGDT (Head of Incubation Centre)

Our Culture: WF is a global not-for-profit that works like a start-up, at a fast-moving, dynamic pace where change is the only constant and flexibility is the key to success. Three mantras that we practice across job roles, levels, functions, programs and initiatives are Quality, Speed, and Scale, in that order. We are an ambitious and inclusive organization, where everyone is encouraged to contribute and ideate. We are intensely and insanely focused on driving excellence in everything we do. We want individuals with the drive for excellence and the passion to do whatever it takes to deliver world-class outcomes to our beneficiaries. We set our own standards, often more rigorous than what our beneficiaries demand, and we want individuals who love it this way. We have a creative and highly energetic environment – one in which we look to each other to innovate new solutions not only for our beneficiaries but for ourselves too. Individuals who are open to collaborating with a borderless mentality, often going beyond the hierarchy and siloed definitions of functional KRAs, will thrive in our environment. This is a workplace where expertise is shared with colleagues around the globe. Individuals uncomfortable with change, constant innovation, and short learning cycles, and those looking for stability and orderly working days, may not find WF to be the right place for them. Finally, we want individuals who want to do greater good for society, leveraging their area of expertise, skills and experience.
The foundation is an equal opportunity firm with no bias towards gender, race, colour, ethnicity, country, language, age or any other dimension that comes in the way of progress. Join us and be a part of our mission!

Education: Bachelor's in Technology / Master's in Technology
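By way of illustration, a minimal Python sketch of the extract-transform-load pattern described in the responsibilities above; the file name, field names, and SQLite target are hypothetical stand-ins for real source systems and a production warehouse:

```python
import csv
import sqlite3

# Extract: read raw records from a hypothetical source extract
# (a stand-in for a database, API, or streaming source).
def extract(path):
    with open(path, newline="", encoding="utf-8") as f:
        yield from csv.DictReader(f)

# Transform: normalize fields and drop rows that fail basic quality checks.
def transform(rows):
    for row in rows:
        if not row.get("citizen_id"):          # completeness check
            continue
        yield (row["citizen_id"].strip(),
               row["district"].strip().title(),
               float(row["amount"] or 0.0))

# Load: upsert into the target table (SQLite here; Redshift/BigQuery in practice).
def load(records, db="warehouse.db"):
    con = sqlite3.connect(db)
    con.execute("""CREATE TABLE IF NOT EXISTS disbursements
                   (citizen_id TEXT PRIMARY KEY, district TEXT, amount REAL)""")
    con.executemany("INSERT OR REPLACE INTO disbursements VALUES (?, ?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("source_extract.csv")))
```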

Posted 10 hours ago

Apply

8.0 - 13.0 years

6 - 8 Lacs

Hyderābād

On-site

LOCATION: India - Hyderabad
JOB ID: R-219115
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 27, 2025
CATEGORY: Information Systems

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Specialist IS Bus Sys Analyst, Neural Nexus

What you will do
Let's do this. Let's change the world. In this vital role you will support the delivery of emerging AI/ML capabilities within Amgen's Neural Nexus program. As part of the Commercial Technology Data & Analytics team, you will collaborate with product owners and cross-functional partners to help design, implement, and iterate on a layered ecosystem centered on DIAL (Data, Insights, Action, and Learning).

Responsibilities include:
- Collaborate with the Commercial Data & Analytics (CD&A) team to help realize business value through the application of commercial data and emerging AI/ML technologies.
- Support delivery activities within the Scaled Agile Framework (SAFe), partnering with Engineering and Product Management to shape roadmaps, prioritize releases, and maintain a refined product backlog.
- Contribute to backlog management by helping break down Epics into Features and Sprint-ready User Stories, ensuring clear articulation of requirements and well-defined Acceptance Criteria and Definitions of Done.
- Ensure non-functional requirements are represented and prioritized within the backlog to maintain performance, scalability, and compliance standards.
- Collaborate with UX to align technical requirements, business processes, and scenarios with user-centered design.
- Assist in the development and delivery of engaging product demonstrations for internal and external partners.
- Support documentation efforts to maintain accurate records of system configurations, processes, and enhancements.
- Contribute to the launch and growth of Neural Nexus product teams focused on data connectivity, predictive modeling, and fast-cycle value delivery for commercial teams.
- Provide input for governance discussions and help prepare materials to support executive alignment on technology strategy and investment.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. We are seeking a highly skilled and experienced Specialist IS Business Analyst with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders, with the following qualifications.
Basic Qualifications:
- Doctorate degree / Master's degree / Bachelor's degree with 8 to 13 years of experience in Information Systems
- Experience with writing user requirements and acceptance criteria
- Affinity for working in a DevOps environment and an Agile mindset
- Ability to work in a team environment, effectively interacting with others
- Ability to meet deadlines and schedules and be accountable

Must-Have Skills:
- Excellent problem-solving skills and a passion for solving complex challenges for AI-driven technologies
- Experience with Agile software development methodologies (Scrum)
- Superb communication skills and the ability to work with senior leadership with confidence and clarity
- Experience writing user requirements and acceptance criteria in agile project management systems such as JIRA
- Experience in managing product features for PI planning and developing product roadmaps and user journeys

Good-to-Have Skills:
- Demonstrated expertise in data and analytics and related technology concepts
- Understanding of data and analytics software systems strategy, governance, and infrastructure
- Familiarity with low-code/no-code test automation software
- Technical thought leadership
- Able to communicate technical or complex subject matter in business terms
- Jira Align experience
- Experience with DevOps, Continuous Integration, and Continuous Delivery methodology

Soft Skills:
- Able to work under minimal supervision
- Excellent analytical and gap/fit assessment skills
- Strong verbal and written communication skills
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

Technical Skills:
- Experience with cloud-based data technologies (e.g., Databricks, Redshift, S3 buckets) on AWS or similar cloud-based platforms
- Experience with design patterns, data structures, test-driven development
- Knowledge of NLP techniques for text analysis and sentiment analysis

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 10 hours ago

Apply

5.0+ years

0 Lacs

Hyderābād

On-site

LOCATION: India - Hyderabad
JOB ID: R-219193
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 27, 2025
CATEGORY: Information Systems

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description: As the Sr Manager of Business Intelligence, you will lead the development and delivery of enterprise-wide BI capabilities that enable data-driven decision-making. You will oversee a team of BI engineers, analysts, and data scientists to deliver scalable, high-impact solutions that support strategic initiatives across the organization. This role requires a strong blend of technical leadership, product vision, and stakeholder engagement to drive innovation and operational excellence in BI.

Roles & Responsibilities:
- Lead the end-to-end delivery of BI products and features, from concept through release and lifecycle management.
- Manage a cross-functional team of engineers, analysts, and product owners to ensure business, quality, and functional goals are met.
- Define and prioritize the BI roadmap, incorporating stakeholder feedback and aligning with enterprise strategy.
- Drive excellence in BI engineering practices, including data modeling, visualization, and performance optimization.
- Collaborate with partner teams to ensure seamless integration of BI solutions with enterprise data platforms.
- Establish and monitor KPIs to measure team performance and product impact.
- Communicate vision, progress, and outcomes to senior leadership and stakeholders.
- Foster a culture of innovation, continuous improvement, and high performance.
Basic Qualifications and Experience:
- 5+ years of managerial experience directly managing people and/or leading teams, projects, or programs, AND one of the following:
  - Doctorate degree and 2 years of Information Systems experience, OR
  - Master's degree and 6 years of Information Systems experience, OR
  - Bachelor's degree and 8 years of Information Systems experience, OR
  - Associate's degree and 10 years of Information Systems experience

Functional Skills:
- Deep expertise in BI platforms and data visualization tools (e.g., Power BI, Tableau, Cognos)
- Strong understanding of data integration, enterprise data fabric, and cloud technologies (e.g., AWS, Databricks)
- Proven experience building and leading high-performing BI teams
- Demonstrated ability to manage product roadmaps, secure funding, and deliver measurable outcomes
- In-depth knowledge of Agile methodologies and the software development lifecycle

Good-to-Have Skills:
- Experience with AWS services (e.g., S3, EMR, Redshift, Athena)
- Familiarity with DevOps tools and cloud-native design patterns
- Prior experience in vendor management and financial oversight
- Understanding of cloud security and compliance frameworks

Professional Certifications:
- AWS Developer certification (preferred)
- Certified DevOps Engineer (preferred)
- Certified Agile Leader or similar (preferred)
- SAFe for Teams certification (preferred)

Soft Skills:
- Strong strategic thinking and decision-making abilities
- Excellent communication and stakeholder management skills
- High attention to detail and critical thinking
- Ability to influence and energize cross-functional teams
- Proactive, self-motivated, and adaptable to change
- Strong presentation and public speaking skills
- Effective in managing multiple priorities and global virtual teams

Shift Information: This position may require working a second or third shift based on business needs. Candidates must be willing and able to work during evening or night shifts if required.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 10 hours ago

Apply

8.0 years

4 - 10 Lacs

Hyderābād

On-site

LOCATION: India - Hyderabad
JOB ID: R-217915
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 27, 2025
CATEGORY: Information Systems

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Sr Mgr Software Development Engineering

What you will do
Let's do this. Let's change the world. In this vital role you will provide technical leadership to enhance the culture of innovation, automation, and solving difficult scientific and business challenges. Technical leadership includes providing vision and direction to develop scalable, reliable solutions.

Responsibilities include:
- Provide leadership to select right-sized and appropriate tools and architectures based on requirements, data source format, and current technologies
- Develop, refactor, research, and improve Weave cloud platform capabilities
- Understand business drivers and technical needs so our cloud services seamlessly, automatically, and securely provide the best service
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
- Build strong partnerships with partner teams
- Build data products and service processes that perform data transformation, metadata extraction, workload management, and error processing management to ensure high-quality data
- Provide clear, integrated documentation for delivered solutions and processes
- Collaborate with business partners to understand user stories and ensure the technical solution/build can deliver on those needs
- Work with multi-functional teams to design and document effective and efficient solutions
- Develop organizational change strategies and assist in their implementation
- Mentor junior data engineers on standard processes in the industry and in the Amgen data landscape

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications.

Basic Qualifications:
- Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years of relevant experience

Must-Have Skills:
- Superb communication and interpersonal skills, with the ability to work closely with multi-functional GTM, product, and engineering teams
- Minimum of 10+ years of overall Software Engineer or Cloud Architect experience
- Minimum 3+ years in an architecture role using public cloud solutions such as AWS
- Experience with the AWS technology stack

Good-to-Have Skills:
- Familiarity with big data technologies, AI platforms, and cloud-based data solutions
- Ability to work effectively across matrixed organizations and lead collaboration between data and AI teams
- Passion for technology and customer success, particularly in driving innovative AI and data solutions
- Experience working with teams of data scientists, software engineers, and business experts to drive insights
- Experience with AWS services such as EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway
- Experience with big data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
- Solid understanding of relevant data standards and industry trends
- Ability to understand new business requirements and prioritize them for delivery
- Experience working in the biopharma/life sciences industry
- Proficiency in one of the coding languages (Python, Java, Scala)
- Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.)
- Experience with schema design and dimensional data modeling
- Experience with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and shell scripting
- Hands-on experience using Databricks/Jupyter or a similar notebook environment
- Experience working with GxP systems
- Experience working in an agile environment (i.e., user stories, iterative development, etc.)
- Experience working with test-driven development and software test automation
- Experience working in a product environment
- Good overall understanding of business, manufacturing, and laboratory systems common in the pharmaceutical industry, as well as the integration of these systems through applicable standards

Soft Skills:
- Excellent analytical and problem-solving skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
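To make the schema design and dimensional data modeling items above concrete, a minimal star-schema sketch in Python; the table and column names are hypothetical, and SQLite stands in for Redshift or another RDBMS:

```python
import sqlite3

# A minimal star schema: one fact table keyed to two dimension tables.
# Table and column names are illustrative, not from the posting.
DDL = """
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    product_name TEXT,
    therapeutic_area TEXT
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- e.g. 20250627
    full_date TEXT,
    month TEXT,
    year INTEGER
);
CREATE TABLE fact_shipments (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    units_shipped INTEGER,
    PRIMARY KEY (product_key, date_key)
);
"""

con = sqlite3.connect(":memory:")
con.executescript(DDL)

# Typical analytical query: join facts to dimensions and aggregate.
rows = con.execute("""
    SELECT d.year, p.therapeutic_area, SUM(f.units_shipped)
    FROM fact_shipments f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, p.therapeutic_area
""").fetchall()
print(rows)
```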

Posted 10 hours ago

Apply

9.0 - 14.0 years

5 - 9 Lacs

Hyderābād

On-site

LOCATION: India - Hyderabad
JOB ID: R-219095
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 27, 2025
CATEGORY: Information Systems

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Specialist IS Analyst

What you will do
Let's do this. Let's change the world. In this vital role you will be part of the Enterprise Data Fabric (EDF) Platform team, leveraging AI and other automation tools to innovate and provide solutions for the business. The role leverages domain and business process expertise to detail product requirements as epics and user stories, along with supporting artifacts like business process maps, use cases, and test plans for the EDF Platform team. This role involves working closely with varied business stakeholders - business users, data engineers, data analysts, and testers - to ensure that the technical requirements for upcoming development are thoroughly elaborated. This enables the delivery team to estimate, plan, and commit to delivery with high confidence, and to identify test cases and scenarios that ensure the quality and performance of IT systems. In this role you will analyze business requirements and help design solutions for the EDF platform. You will collaborate with multi-functional teams to understand business needs, identify system enhancements, and drive system implementation projects. Experience in business analysis, system design, and project management will enable this role to deliver innovative and effective technology products.
What we expect of you

Roles & Responsibilities:
- Collaborate with System Architects and Product Owners to manage business analysis activities for systems, ensuring alignment with engineering and product goals
- Capture the voice of the customer to define business processes and product needs
- Collaborate with business stakeholders, Architects, and Engineering teams to prioritize release scopes and refine the product backlog
- Facilitate the breakdown of Epics into Features and Sprint-sized User Stories, and participate in backlog reviews with the development team
- Clearly express features in User Stories/requirements so all team members and stakeholders understand how they fit into the product backlog
- Ensure Acceptance Criteria and Definition of Done are well-defined
- Stay focused on software development to ensure it meets requirements, providing proactive feedback to stakeholders
- Develop and execute effective product demonstrations for internal and external stakeholders
- Help develop and maintain a product roadmap that clearly outlines planned features and enhancements, timelines, and achievements
- Identify and manage risks associated with the systems, requirement validation, and user acceptance
- Develop and maintain documentation of configurations, processes, changes, communication plans, and training plans for end users
- Ensure operational excellence, cybersecurity, and compliance
- Collaborate with geographically dispersed teams, including those in the US and other international locations
- Foster a culture of collaboration, innovation, and continuous improvement
- Ability to work flexible hours that align with US time zones

Basic Qualifications:
- Master's degree with 9 - 14 years of experience in Computer Science, Business, Engineering, IT or related field, OR
- Bachelor's degree with 10 - 14 years of experience in Computer Science, Business, Engineering, IT or related field, OR
- Diploma with 10 - 14 years of experience in Computer Science, Business, Engineering, IT or related field

Must-Have Skills:
- Proven ability in translating business requirements into technical specifications and writing user requirement documents
- Able to communicate technical or complex subject matter in business terms
- Experience with Agile software development methodologies (Scrum)
- Excellent communication skills and the ability to interface with senior leadership with confidence and clarity
- Strong knowledge of data engineering processes
- Experience in managing product features for PI planning and developing product roadmaps and user journeys
- Technical thought leadership

Good-to-Have Skills:
- Experience maintaining SaaS (software as a service) solutions and COTS (commercial off-the-shelf) solutions
- Experience with AWS services (like EC2, S3), Salesforce, Jira, and API gateway, etc.
- Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.)
- Experience in understanding microservices architecture and API development
- Experience with data analysis, data modeling, and data visualization solutions such as Tableau and Spotfire

Professional Certifications:
- SAFe for Teams certification (preferred)
- Certified Business Analysis Professional (preferred)

Soft Skills:
- Excellent critical-thinking, analytical, and problem-solving skills
- Strong verbal and written communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Strong presentation and public speaking skills
- Ability to work effectively with global, virtual teams
- Ability to manage multiple priorities successfully
- High degree of initiative and self-motivation
- Ability to work under minimal supervision
- Skilled in providing oversight and mentoring team members; demonstrated ability to effectively delegate work
- Team-oriented, with a focus on achieving team goals

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 10 hours ago

Apply

1.0 - 3.0 years

3 - 10 Lacs

Hyderābād

On-site

Job Description
Associate Manager, Scientific Data Engineering

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organization driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Join a team that is passionate about using data, analytics, and insights to drive decision-making and create custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview
- Design, develop, and maintain data pipelines to extract data from various sources and populate a data lake and data warehouse.
- Work closely with data scientists, analysts, and business teams to understand data requirements and deliver solutions aligned with business goals.
- Build and maintain platforms that support data ingestion, transformation, and orchestration across various data sources, both internal and external.
- Use data orchestration, logging, and monitoring tools to build resilient pipelines.
- Automate data flows and pipeline monitoring to ensure scalability, performance, and resilience of the platform.
- Monitor, troubleshoot, and resolve issues related to the data integration platform, ensuring uptime and reliability.
- Maintain thorough documentation for integration processes, configurations, and code to ensure easy onboarding for new team members and future scalability.
- Develop pipelines to ingest data into cloud data warehouses.
- Establish, modify, and maintain data structures and associated components.
- Create and deliver standard reports in accordance with stakeholder needs, conforming to agreed standards.
- Work within a matrix organizational structure, reporting to both the functional manager and the project manager.
- Participate in project planning, execution, and delivery, ensuring alignment with both functional and project goals.

What should you have:
- Bachelor's degree in Information Technology, Computer Science, or any technology stream.
- 1 to 3 years of experience developing data pipelines and data infrastructure, ideally within a drug development or life sciences context.
- Demonstrated expertise in delivering large-scale information management technology solutions encompassing data integration and self-service analytics enablement.
- Experience in software/data engineering practices (including versioning, release management, deployment of datasets, agile and related software tools).
- Ability to design, build, and unit test applications on the Spark framework in Python.
- Ability to build PySpark-based applications for both batch and streaming requirements, which requires in-depth knowledge of Databricks/Hadoop.
- Experience working with storage frameworks like Delta Lake/Iceberg.
- Experience working with MPP data warehouses like Redshift.
- Cloud-native experience, ideally AWS certified.
- Strong working knowledge of at least one reporting/insight-generation technology.
- Good interpersonal and communication skills (verbal and written).
- Proven record of delivering high-quality results.
- Product- and customer-centric approach.
- Innovative thinking, experimental mindset.

Mandatory Skills (by skill category):
- Foundational Data Concepts: SQL (intermediate/advanced); Python (intermediate)
- Cloud Fundamentals (AWS focus): AWS Console, IAM roles, regions, concepts of cloud computing; AWS S3
- Data Processing & Transformation: Apache Spark (concepts and usage); Databricks (platform usage), Unity Catalog, Delta Lake
- ETL & Orchestration: AWS Glue (ETL, Catalog), Lambda; Apache Airflow (DAGs and orchestration) or another orchestration tool; dbt (Data Build Tool); Matillion (or a similar ETL tool)
- Data Storage & Querying: Amazon Redshift / Azure Synapse; Trino or equivalent; AWS Athena / query federation
- Data Quality & Governance: data quality concepts and implementation; data observability concepts; Collibra or an equivalent tool
- Real-time / Streaming: Apache Kafka (concepts and usage)
- DevOps & Automation: CI/CD concepts and pipelines (GitHub Actions / Jenkins / Azure DevOps)

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who we are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific.
Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 08/26/2025

A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R353468
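As an illustration of the Airflow orchestration skills this posting lists, a minimal DAG sketch; the DAG id, task names, and callables are hypothetical placeholders for real ingest and transform logic:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task callables standing in for real pipeline steps.
def ingest():
    print("pull source data into the lake")

def transform():
    print("run Spark/dbt transformations")

def publish():
    print("load curated tables into the warehouse")

with DAG(
    dag_id="scientific_data_pipeline",   # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="publish", python_callable=publish)

    t1 >> t2 >> t3   # linear dependency: ingest, then transform, then publish
```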

Posted 10 hours ago

Apply

0.0 - 3.0 years

0 Lacs

Hyderābād

On-site

LOCATION: India - Hyderabad
JOB ID: R-219186
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 27, 2025
CATEGORY: Information Systems

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description: As an Associate BI Engineer, you will support the development and delivery of data-driven solutions that enable business insights and operational efficiency. You will work closely with senior BI engineers, analysts, and stakeholders to build dashboards, analyze data, and contribute to the design of scalable reporting systems. This is an ideal role for early-career professionals looking to grow their technical and analytical skills in a collaborative environment.

Roles & Responsibilities:
- Assist in designing and maintaining dashboards and reports using tools like Power BI, Tableau, or Cognos.
- Perform basic data analysis to identify trends and support business decisions.
- Collaborate with team members to gather requirements and translate them into technical specifications.
- Support data validation, testing, and documentation efforts.
- Learn and apply best practices in data modeling, visualization, and BI development.
- Participate in Agile ceremonies and contribute to sprint planning and backlog grooming.

Basic Qualifications and Experience:
- Bachelor's degree and 0 to 3 years of Computer Science, IT or related field experience, OR
- Diploma and 3 to 5 years of Computer Science, IT or related field experience

Functional Skills:
- Exposure to data visualization tools such as Power BI, Tableau, or QuickSight
- Basic proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis
- Familiarity with data modeling, warehousing, and ETL pipelines
- Understanding of data structures and reporting concepts
- Strong analytical and problem-solving skills

Good-to-Have Skills:
- Familiarity with cloud services like AWS (e.g., Redshift, S3, EC2)
- Understanding of Agile methodologies (Scrum, SAFe)
- Knowledge of DevOps and CI/CD practices
- Familiarity with scientific or healthcare data domains

Soft Skills:
- Strong verbal and written communication skills
- Willingness to learn and take initiative
- Ability to work effectively in a team environment
- Attention to detail and commitment to quality
- Ability to manage time and prioritize tasks effectively

Shift Information: This position may require working a second or third shift based on business needs. Candidates must be willing and able to work during evening or night shifts if required.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
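As a small illustration of the basic SQL/Python data analysis this role expects, a pandas sketch; the file name and columns are hypothetical:

```python
import pandas as pd

# Load a hypothetical extract of monthly sales; columns are illustrative.
df = pd.read_csv("monthly_sales.csv", parse_dates=["month"])

# Simple trend analysis: totals and averages per region.
summary = (
    df.groupby("region")["revenue"]
      .agg(["sum", "mean"])
      .rename(columns={"sum": "total_revenue", "mean": "avg_monthly_revenue"})
)

# Month-over-month growth per region, after sorting chronologically.
df = df.sort_values("month")
df["mom_growth"] = df.groupby("region")["revenue"].pct_change()

print(summary)
print(df.tail())
```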

Posted 10 hours ago

Apply

3.0 years

6 - 6 Lacs

Bengaluru

On-site

Basic qualifications:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Bachelor's degree

Are you passionate about data and code? Does the prospect of dealing with mission-critical data excite you? Do you want to build data engineering solutions that process a broad range of business and customer data? Do you want to continuously improve the systems that enable annual worldwide revenue of hundreds of billions of dollars? If so, then the eCommerce Services (eCS) team is for you!

In eCommerce Services (eCS), we build systems that span the full range of eCommerce functionality, from Privacy, Identity, Purchase Experience and Ordering to Shipping, Tax and Financial integration. eCommerce Services manages several aspects of the customer life cycle, starting from account creation and sign-in, to placing items in the shopping cart, proceeding through checkout, order processing, managing order history and post-fulfillment actions such as refunds and tax invoices. eCS services determine sales tax and shipping charges, and we ensure the privacy of our customers. Our mission is to provide a commerce foundation that accelerates business innovation and delivers a secure, available, performant, and reliable shopping experience to Amazon's customers.

The goal of the eCS Data Engineering and Analytics team is to provide high-quality, on-time reports to Amazon business teams, enabling them to expand globally at scale. Our team has a direct impact on retail CX, a key component that runs our Amazon flywheel. As a Data Engineer, you will own the architecture of DW solutions for the Enterprise using multiple platforms. You will have the opportunity to lead the design, creation and management of extremely large datasets, working backwards from business use cases. You will use your strong business and communication skills to work with business analysts and engineers to determine how best to design the data warehouse for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes in the data warehouse platform to support the rapidly growing and dynamic business demand for data, and use it to deliver the data as a service, which will have an immediate influence on day-to-day decision making.

Key job responsibilities:
- Develop data products, infrastructure and data pipelines leveraging AWS services (such as Redshift, Kinesis, EMR, Lambda, etc.) and internal BDT tools (DataNet, Cradle, QuickSight, etc.)
- Improve existing solutions and come up with a next-generation data architecture to improve scale, quality, timeliness, coverage, monitoring and security.
- Develop new data models and end-to-end data pipelines.
- Create and implement a Data Governance strategy for mitigating privacy and security risks.

Preferred qualifications:
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
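A minimal sketch of one load step such a role might own, using Redshift's COPY to ingest an S3 extract; the cluster endpoint, credentials, bucket, table, and IAM role ARN are placeholders, and psycopg2 is one common driver choice:

```python
import psycopg2

# Placeholder connection settings for a Redshift cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="...",
)

# Redshift's COPY ingests files from S3 in parallel; Parquet shown here.
COPY_SQL = """
    COPY sales_orders
    FROM 's3://example-bucket/orders/2025/06/27/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)       # load the day's partition
    cur.execute("SELECT COUNT(*) FROM sales_orders")
    print("rows loaded:", cur.fetchone()[0])
conn.close()
```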

Posted 10 hours ago

Apply

3.0 years

0 Lacs

Bengaluru

On-site

Voyager (94001), India, Bangalore, Karnataka

Principal Associate - Data Engineer

Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you'll have the opportunity to be on the forefront of driving a major transformation within Capital One.

What You'll Do:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies
- Work with a team of developers with deep experience in machine learning, distributed microservices, and full-stack systems
- Utilize programming languages like Java, Scala, Python and open-source RDBMS and NoSQL databases and cloud-based data warehousing services such as Redshift and Snowflake
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal and external technology communities, and mentoring other members of the engineering community
- Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment
- Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance

Basic Qualifications:
- Bachelor's Degree
- At least 3 years of experience in application development (Internship experience does not apply)
- At least 1 year of experience in big data technologies

Preferred Qualifications:
- 5+ years of experience in application development including Python, SQL, Scala, or Java
- 2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
- 3+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
- 2+ years of experience working on real-time data and streaming applications
- 2+ years of experience with NoSQL implementations (Mongo, Cassandra)
- 2+ years of data warehousing experience (Redshift or Snowflake)
- 3+ years of experience with UNIX/Linux including basic commands and shell scripting
- 2+ years of experience with Agile engineering practices

At this time, Capital One will not sponsor a new applicant for employment authorization for this position.

No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
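To illustrate the real-time streaming experience the preferred qualifications mention, a minimal Kafka consumer sketch using the kafka-python client; the topic, brokers, and rule are hypothetical:

```python
import json

from kafka import KafkaConsumer

# Placeholder topic and brokers; the deserializer assumes JSON-encoded messages.
consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="streaming-example",
)

for message in consumer:
    txn = message.value
    # Trivial real-time rule standing in for a real streaming application.
    if txn.get("amount", 0) > 10_000:
        print("flag for review:", txn.get("txn_id"))
```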

Posted 10 hours ago

Apply

12.0 - 15.0 years

6 - 10 Lacs

Bengaluru

On-site

Organization: At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward, to progress. To make the right financial decisions and achieve their dreams, targets and aspirations. Each of us globally is dedicated to offering outstanding service, excellent advice and intuitive solutions to help our customers manage their finances in the ways they want to. Regardless of where you work within our organisation, your initiative, talent, ideas and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Staff Data Engineer
Location: Bangalore - Manyata Tech Park

Business & Team: The Chief Data and Analytics Office (CDAO) - Data Platforms is in place to realise a data-driven organisation for the Commonwealth Bank Group. We do this by solving complex problems around data for the business and activating data for strategic and sustained competitive advantage to enhance the financial wellbeing of our customers in a safe, sound and secure way. As we embark on our cloud journey, the team is focussed on building robust data processing frameworks for the bank, enabling secure and seamless data processing for our tenants in a simplified and automated way.

Impact & Contribution: As a Staff Data Engineer, you will be part of a large cloud engineering cohort all aligned on executing the technology strategy, working alongside AWS Engineers and Cloud Architects to help create scalable and federated cloud-native systems. We are establishing a greenfield landscape from the ground up, partnering with our stakeholders to design, develop and deliver frameworks that underpin our ability to deliver enhanced data solutions with robust controls for our internal customers, deriving value for the Bank's customers. Bring your expertise in building scalable frameworks on a cloud platform, with a focus on automation and innovation.

Roles & Responsibilities:
- Hands-on technical experience working in AWS, with knowledge of AWS services such as S3, Athena, IAM roles, Redshift, Glue and Lake Formation.
- Hands-on technical experience in PySpark.
- Well versed in Unix shell scripting, Python scripting and PL/SQL.
- Passionate about Cloud/DevOps/automation, with a keen interest in solving complex problems in a systematic approach.
- Ability to work independently and collaborate closely with team members and technology leads.
- A proactive approach, constantly seeking innovative solutions to complex technical challenges.
- Leadership qualities that enable the progress of the team.
- Play a key role in mentoring junior developers and actively participate in design sessions and code reviews.

Essential Skills:
- Relevant experience of 12-15 years.
- Experience in Python scripting, Unix scripting and PL/SQL.
- Strong exposure to PySpark.
- Exposure to EMR.
- Cloud experience is essential; exposure to AWS S3, Glue, Redshift and Lake Formation is needed.
- Understanding of CI/CD is desirable.
- Mentoring of junior developers.

Educational Qualifications: Bachelor's degree in Engineering in Computer Science/Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We're keen to support you with the next step in your career.

We're aware of some accessibility issues on this site, particularly for screen reader users.
We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 17/07/2025

Posted 10 hours ago

Apply

0 years

5 - 7 Lacs

Bengaluru

On-site

GlassDoor logo

Job Description: Python PySpark Data Engineer
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

We are seeking a skilled Lead Data Engineer with strong programming and SQL skills to join our team. The ideal candidate will have hands-on experience with Python, PySpark and AWS data analytics services, and a basic understanding of general AWS services.

Key Responsibilities:
Design, develop, and optimize data pipelines using Python, PySpark, and AWS data analytics services such as RDS, DMS, Glue, Lambda, Redshift, and Athena.
Implement data migration and transformation processes using AWS DMS and Glue.
Work with SQL (Oracle & Postgres) to query, manipulate, and analyse large datasets.
Develop and maintain ETL/ELT workflows for data ingestion and transformation.
Utilize AWS services like S3, IAM, CloudWatch, and VPC to ensure secure and efficient data operations.
Write clean and efficient Python scripts for automation and data processing.
Collaborate with DevOps teams using Azure DevOps for CI/CD pipelines and infrastructure management.
Monitor and troubleshoot data workflows to ensure high availability and performance.

At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
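For candidates brushing up on the Glue work described above: a Glue job is usually authored in the console or as a script, then driven from Python with boto3. The sketch below is a minimal, hedged illustration; the job name and region are hypothetical placeholders, and it assumes AWS credentials are already configured.

    import time
    import boto3

    # Hypothetical region; use whichever region hosts your Glue job.
    glue = boto3.client("glue", region_name="ap-south-1")

    # Start an existing Glue ETL job (hypothetical name) and poll until it finishes.
    run = glue.start_job_run(JobName="example-dms-to-redshift-transform")
    run_id = run["JobRunId"]

    while True:
        state = glue.get_job_run(
            JobName="example-dms-to-redshift-transform", RunId=run_id
        )["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED"):
            print("final state:", state)
            break
        time.sleep(30)

In practice this polling loop would live inside an orchestrator task rather than a bare script, but the start_job_run / get_job_run pair is the core of it.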

Posted 10 hours ago

Apply

5.0 - 8.0 years

0 Lacs

Andhra Pradesh

Remote

GlassDoor logo

We are seeking a Data Quality Analyst with Business Analyst expertise to support data engineering and governance initiatives. This hybrid role involves ensuring data accuracy and integrity through systematic QA processes, while also analyzing and documenting business workflows, data lineage, and functional requirements. The ideal candidate will act as a bridge between technical and business teams, playing a crucial role in validating data pipelines and ensuring process transparency.

Key Responsibilities:

Quality Assurance for Data Engineering:
Validate data pipelines, ETL processes, and data transformations to ensure accuracy and completeness.
Design and execute test cases to verify source-to-target data mapping and transformation logic.
Identify data anomalies, inconsistencies, and quality issues, and collaborate with data engineers to resolve them.
Work with large datasets using SQL and other data analysis tools for validation and troubleshooting.

Business Analysis & Documentation:
Collaborate with stakeholders to gather, analyze, and document data requirements, business rules, and workflows.
Create clear and concise documentation including data flow diagrams, process maps, and requirement specifications.
Document as-is and to-be states of data processes, ensuring alignment with business objectives and compliance requirements.
Maintain traceability between business requirements, technical specifications, and QA validation.

Workflow and Process Management:
Develop an end-to-end understanding of data processes and usage across systems.
Contribute to data governance efforts by defining data quality KPIs, validation rules, and reporting metrics.
Participate in UAT and data reviews, ensuring business needs are met with high data quality.

Required Skills and Qualifications:
5-8 years of experience in QA and/or Business Analysis, ideally supporting data engineering or analytics teams.
Strong understanding of ETL/ELT workflows, data warehousing, and data lifecycle management.
Proficiency in SQL for data validation and analysis.
Experience working with tools such as JIRA, Confluence, Airflow (preferred), or similar workflow/documentation tools.
Excellent skills in writing test plans, test cases, and business/technical documentation.
Ability to interpret and document complex business processes and data flows.
Strong communication and stakeholder management skills across technical and non-technical teams.

Preferred Skills:
Familiarity with cloud platforms such as AWS, Snowflake, Redshift, or BigQuery.
Exposure to data catalog, lineage, or governance tools (e.g., Collibra, Alation) is a plus.
Understanding of data privacy and compliance (e.g., GDPR, HIPAA) is a bonus.

Education: Bachelor's degree in Computer Science, Information Systems, Business Analytics, or a related field.

Additional Information: This is an offshore (India-based) remote role with opportunities to work with global data and analytics teams. Ideal for QA professionals who have transitioned into or supported business analysis, especially in data-focused projects.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa.
We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 10 hours ago

Apply

5.0 - 10.0 years

5 - 8 Lacs

Noida

On-site

GlassDoor logo

Posted On: 27 Jun 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies ? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations ? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description Drives the overall software development lifecycle including working across functional teams to transform requirements into features, managing development teams and processes, and conducting software testing and maintenance. Specific project areas of focus include translating user requirements into technical specifications, writing code and managing the preparation of design specifications. Supports system design, provides advice on security requirements and debugs business systems and service applications. Applies deep knowledge of algorithms, data structures and programming languages to develop high quality technology applications and services - including tools, standards, and relevant software platforms based on business requirements. Translates user needs into technical specifications by understanding, conceptualizing, and facilitating technical requirements from PO/user. Analyzes, develops, tests, and implements new software programs, and documentation of entire software development life cycle execution. Performs preventative and corrective maintenance, troubleshooting and fault rectification of system and core software components. Ensures that code/configurations adhere to the security, logging, error handling, and performance standards and non-functional requirements. Evaluates new technologies for fit with the program/system/eco-system and the associated upstream and downstream impacts on process, data, and risk. Follows release management processes and standards and applies version controls. Assists in interpreting and documentation of client requirements. Focus is primarily on business/group within BMO; may have broader, enterprise-wide focus. Provides specialized consulting, analytical and technical support. Exercises judgment to identify, diagnose, and solve problems within given rules. Works independently and regularly handles non-routine situations. 
Broader work or accountabilities may be assigned as needed. Experience with Event-driven design / architecture Qualifications: Foundational level of proficiency: Creative thinking. Building and managing relationships. Emotional agility. Intermediate level of proficiency: Cloud computing Microservices. Technology Business Requirements Definition, Analysis and Mapping. Adaptability. Verbal & written communication skills. Analytical and problem-solving skills. Advanced level of proficiency: Programming Applications Integration. System Development Lifecycle. System and Technology Integration. Typically, between 5 - 10 years of relevant experience and post-secondary degree in related field of study or an equivalent combination of education and experience. Technology Required: Java Spring Boot framework OpenShift Python NodeJS Ansible Apache Kafka/Spark/Hadoop/HDFS Oracle Databases Linux/Unix/Windows Oracle IBM WebSphere/HIS Microservices Cloud Computing (AWS) AWS Lambda/SNS/SQS/DynamoDB/Redshift/CDK Event Driven Architecture Test Driven Development Agile/Scrum SDLC JSON and XML data notations Knowledge of ISO 20022 standard ServiceNow Mandatory Competencies Java - Core JAVA Fundamental Technical Skills - Spring Framework/Hibernate/Junit etc. Database - SQL Java Others - Spring Boot Cloud - AWS Java Others - Kafka Fundamental Technical Skills - Programming Multithreading / Collections Fundamental Technical Skills - OOPS/Design Architecture - Micro Service Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.

Posted 10 hours ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description: Payroll Technology at Amazon is all about enabling our business to perform at scale as efficiently as possible with no defects. As Amazon's workforce grows, both in size and geography, Amazon's payroll operations become increasingly complex, and our customers are asked to do more with less. Process can only get them so far, and that's where we come in with technology solutions to integrate and automate systems, detect defects before payment, and provide insights. As a data engineer in payroll, you will onboard payroll vendors across various geographies by building versatile and scalable design solutions. Strong written and verbal communication, and the ability to communicate with end users in non-technical terms, are vital to your long-term success. The ideal candidate will have experience working with large datasets, distributed computing technologies and service-oriented architecture, will relish working with large volumes of data, and will enjoy the challenge of highly complex technical contexts. He/she should be an expert in data modeling, ETL design and business intelligence tools, and have hands-on knowledge of columnar databases. He/she should be a self-starter, comfortable with ambiguity, able to think big, and enjoy working in a fast-paced team.

Responsibilities:
Design, build and own all the components of a high-volume data warehouse end to end.
Build efficient data models using industry best practices and metadata for ad-hoc and pre-built reporting.
Provide wing-to-wing data engineering support for project lifecycle execution (design, execution and risk assessment).
Interface with business customers, gathering requirements and delivering complete data & reporting solutions, owning the design, development, and maintenance of ongoing metrics, reports, dashboards, etc. to drive key business decisions.
Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources.
Own the functional and non-functional scaling of software systems in your ownership area.
Implement big data solutions for distributed computing.
Be willing to learn and develop a strong skill set in AWS technologies.

Key job responsibilities: As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, drive the database design, and spearhead the best practices to deliver high-quality products.

A day in the life: You are expected to do data modelling, database design, build data pipelines as per Amazon standards, conduct design reviews, and support data privacy and security initiatives. You will attend regular stand-up meetings and provide your updates. You will keep an eye out for opportunities to improve the product or user experience and suggest those enhancements. You will participate in requirement grooming meetings to ensure the use cases we deliver are complete and functional. You will take your turn at on-call and own production operational maintenance. You will respond to customer issues and monitor databases for a healthy state and performance.
About The Team: Our mission is to build applications which solve the challenges Global Payroll Operations teams face on a daily basis, automate the tasks they perform manually, provide a seamless experience by integrating with other dependent systems, and eventually reduce pay defects and improve pay accuracy.

Basic Qualifications:
3+ years of data engineering experience
4+ years of SQL experience
Experience with data modeling, warehousing and building ETL pipelines

Preferred Qualifications:
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3002941

Posted 10 hours ago

Apply

5.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site

Linkedin logo

For a quick response, please fill out the form: Job Application Form 34043 - Data Scientist - Senior I - Udaipur
https://docs.google.com/forms/d/e/1FAIpQLSeBy7r7b48Yrqz4Ap6-2g_O7BuhIjPhcj-5_3ClsRAkYrQtiA/viewform

3–5 years of experience in Data Engineering or similar roles
Strong foundation in cloud-native data infrastructure and scalable architecture design
Build and maintain reliable, scalable ETL/ELT pipelines using modern cloud-based tools
Design and optimize Data Lakes and Data Warehouses for real-time and batch processing
Ingest, transform, and organize large volumes of structured and unstructured data
Collaborate with analysts, data scientists, and backend engineers to define data needs
Monitor, troubleshoot, and improve pipeline performance, cost-efficiency, and reliability
Implement data validation, consistency checks, and quality frameworks
Apply data governance best practices and ensure compliance with privacy and security standards
Use CI/CD tools to deploy workflows and automate pipeline deployments
Automate repetitive tasks using scripting, workflow tools, and scheduling systems
Translate business logic into data logic while working cross-functionally
Strong in Python and familiar with libraries like pandas and PySpark
Hands-on experience with at least one major cloud provider (AWS, Azure, GCP)
Experience with ETL tools like AWS Glue, Azure Data Factory, GCP Dataflow, or Apache NiFi
Proficient with storage systems like S3, Azure Blob Storage, GCP Cloud Storage, or HDFS
Familiar with data warehouses like Redshift, BigQuery, Snowflake, or Synapse
Experience with serverless computing like AWS Lambda, Azure Functions, or GCP Cloud Functions
Familiar with data streaming tools like Kafka, Kinesis, Pub/Sub, or Event Hubs
Proficient in SQL, and knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases
Familiar with big data frameworks like Hadoop or Apache Spark
Experience with orchestration tools like Apache Airflow, Prefect, GCP Workflows, or ADF Pipelines
Familiarity with CI/CD tools like GitLab CI, Jenkins, Azure DevOps
Proficient with Git, GitHub, or GitLab workflows
Strong communication, collaboration, and problem-solving mindset
Experience with data observability or monitoring tools (bonus points)
Contributions to internal data platform development (bonus points)
Comfort working in data mesh or distributed data ownership environments (bonus points)
Experience building data validation pipelines with Great Expectations or similar tools (bonus points)

Posted 11 hours ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

BUSINESS ANALYST / PROGRAM MANAGER

No. of Positions: 01
Experience Required: 5 to 10 years
Position Type: C2C
Duration of Contract: 6 to 9 Months
Working Location: Mumbai (onsite)
Budget: Open

Position Overview: We are seeking a versatile and detail-oriented Business Analyst / Program Manager to support our NBFC client in accelerating the documentation of existing Qlik Sense dashboards and Regulatory Reports. The ideal candidate will play a pivotal role in coordinating and executing documentation efforts, ensuring technical and functional clarity, and facilitating timely sign-offs. The role also includes engaging with stakeholders and presenting regular progress updates, risks, and milestones.

Key Responsibilities:
· Develop and maintain comprehensive technical and functional documentation for existing Qlik Sense dashboards and Regulatory Reports/dumps.
· Ensure traceability of data sources, transformation logic, and business rules across platforms including Snowflake, Talend, Oracle, and Qlik.
· Capture source field definitions and data lineage, acknowledging that reports may pull data from multiple systems.
· Stakeholder Engagement: Conduct walkthrough sessions with Business Analysts and end-users to validate documentation. Gather feedback and obtain formal sign-off to ensure alignment with business requirements.
· Weekly Presentations: Prepare and present updates on progress; highlight progress, risks, and upcoming milestones in weekly team meetings.

Experience:
· 5 to 10 years of experience in Business Analysis and/or Program Management.
· Proven experience in documenting Business Analysis and Intelligence reports and data platforms.
· Strong knowledge of Qlik Sense, Snowflake or Talend, and Oracle.
· Excellent problem-solving, analytical, and communication skills.
· Strong interpersonal and collaboration abilities across cross-functional teams.

Educational Qualifications: Bachelor’s degree in Computer Science, Information Technology, Engineering, Business Administration, or a related field is required.

Good-to-Have Skills: Indian NBFC Context

1. Domain & Functional Understanding
NBFC Lending Lifecycle Knowledge: Loan origination, underwriting, disbursement, servicing, collections. Understanding of RBI regulations, NBFC classification (deposit-taking, non-deposit), Fair Practices Code, KYC/AML norms.
Exposure to Loan Products: Personal loans, gold loans, SME loans, vehicle finance, digital lending.
Credit Bureau Data Handling: Familiarity with CIBIL/CRIF reports & score interpretation.
Retail & SME Lending Processes: Familiarity with unsecured & secured lending, underwriting, credit scoring models.
Collections & Recovery Practices: Knowledge of early-stage and late-stage collection workflows.
Digital Lending Models: Insight into co-lending, BNPL (Buy Now Pay Later), DSA/DST models, and fintech partnerships.

2. Data & Analytics Skills
Advanced Excel: Data cleaning, formulas, pivot tables, macros for loan and risk reports.
SQL (Intermediate to Advanced): Writing efficient queries to pull customer, loan, payment, and delinquency data.
Data Visualization Tools: Power BI, Tableau, Qlik — useful for dashboards on collections, portfolio quality, etc.
Data Profiling & Quality Checks: Detecting missing, duplicate, or inconsistent loan/customer records.

3. Tools & Technologies
Experience with NBFC Systems: LOS (Loan Origination System), LMS (Loan Management System), and core NBFC platforms like FinnOne, MyFin, BRNet, Vymo, Oracle Fusion, OGL, Kiya, Fincorp, Hotfoot (sanction), core banking systems, or in-house NBFC systems.
ETL Knowledge (Good to Have): Talend, Informatica, SSIS for understanding backend data flows.
Python (Basic Scripting): For EDA (exploratory data analysis) or automating reports — pandas, NumPy.
CRM/Collection Tools Insight: Salesforce, LeadSquared, or collection platforms like Credgenics.
API/Data Integration: Understanding of how NBFCs integrate with credit bureaus (CIBIL, CRIF), Aadhaar, CKYC, bank statement analysers, etc.

4. Business Metrics & Reporting
Understanding of NBFC KPIs: NPA %, Portfolio at Risk (PAR), Days Past Due (DPD) buckets, Collection Efficiency, Bounce Rate.
Regulatory Reporting Awareness: RBI-mandated MIS reports or returns (even if not the owner, knowing the data helps).

5. Compliance, Data Privacy & Risk
Data Privacy Sensitivity: Understanding DPDP Act compliance for customer data handling.
Risk Scoring Models (Good to Have): Working knowledge of inputs used in internal credit models.

6. Project & Communication Skills
Agile Tools: JIRA, Confluence for sprint planning & requirement documentation formats.
Strong Data Storytelling: Presenting insights and trends clearly to product, risk, or operations teams.
Collaboration with Data Engineering Teams: Translate business needs into data requirements, schemas, and validations.
Stakeholder Communication: Ability to work with risk, compliance, IT, operations, and business heads.
Change Management Readiness: Supporting adoption of new systems/processes.
Presentation & Reporting: Converting findings into clear, impactful reports or dashboards.

Bonus Skills (Niche but Valuable):
Working with UPI/NACH/Account Aggregator datasets
Knowledge of data lakes or cloud-based analytics stacks (e.g. Snowflake, AWS Redshift)
Hands-on with A/B testing or loan decisioning analytics
Familiarity with AI/ML usage in loan decisioning

Posted 12 hours ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

FanCode is India’s premier sports destination committed to giving fans a highly personalised experience across content and merchandise for a wide variety of sports. Founded by sports industry veterans Yannick Colaco and Prasana Krishnan in March 2019, FanCode has over 100 million users. It has partnered with domestic, international sports leagues and associations across multiple sports. In content, FanCode offers interactive live streaming for sports with industry-first subscription formats like Match, Bundle and Tour Passes, along with monthly and annual subscriptions at affordable prices. Through FanCode Shop, it also offers fans a wide range of sports merchandise for sporting teams, brands and leagues across the world. Technology @ FanCode We have one mission: Create a platform for all sports fans. Built by sports fans for sports fans, we cover Sports Live Video Streaming, Live Scores & Commentary, Video On Demand, Player Analytics, Fantasy Research, News, and e-Commerce. We’re at the beginning of our story and growing at an incredible pace. Our tech stack is hosted on AWS and GCP, leveraging Amazon EC2, CloudFront, Lambda, API Gateway, Google Compute Engine, Cloud Functions, and Google Cloud Storage. We use a microservices-based architecture built on Java, Node.js , Python, PHP, Redis, MySQL, Cassandra, and Elasticsearch to serve product features. Our data-driven team utilizes Python and other big data technologies for Machine Learning and Predictive Analytics, along with Kafka, Spark, Redshift, and BigQuery to keep improving FanCode's performance. Your Role As the Director of DevOps at FanCode, you will lead and shape the vision, strategy, and execution of our Core Infra team. This team is responsible for maintaining a stable, secure, and scalable environment that empowers our talented developers to innovate and deliver exceptional experiences to sports fans globally. Key Responsibilities Strategic Leadership: Develop and execute the DevOps strategy, ensuring alignment with FanCode's overall business objectives. Shape and communicate the vision for cloud-native infrastructure, driving scalability, reliability, and performance. Mentor and lead the DevOps team, fostering a culture of innovation, collaboration, and continuous learning. Infrastructure and Automation: Oversee the deployment of resilient Cloud Architectures using Infrastructure as Code (IaC) tools like Terraform and Ansible. Design and implement tools for service-oriented architecture, including service discovery, config management, and container orchestration. Lead the development of a Compute Orchestration Platform using EC2 and GCE, driving automation and self-service infrastructure. Be hands-on with Kubernetes (GKE/EKS), container orchestration, and scalable infrastructure solutions. CI/CD, Performance, and Security: Strategize and implement CI/CD pipelines using tools like Jenkins, ArgoCD, and Github Actions to optimize deployment workflows. Champion best practices for networking and security at scale, ensuring compliance and data integrity. Implement monitoring solutions using DataDog, NewRelic, CloudWatch, ELK Stack, and Prometheus/Grafana for proactive performance management. Collaboration and Cross-Functional Alignment: Collaborate with Engineering, QA, Product, and Data Science teams to streamline product development and release cycles. Promote knowledge sharing and infrastructure best practices across all technical teams, ensuring consistent standards. 
Innovation and Continuous Improvement: Stay abreast of the latest DevOps trends and technologies, driving continuous improvement initiatives. Evaluate and recommend cutting-edge tools and practices to enhance FanCode's infrastructure and processes. Must Haves: 7+ years of relevant experience in DevOps, with at least 3 years in a leadership role. Strong experience with AWS and GCP cloud platforms, with hands-on expertise in Infrastructure as Code (IaC). Proficiency in scripting languages like Python or Bash. Deep hands-on expertise with Kubernetes (GKE/EKS) and container orchestration. Proven ability to lead by example, getting hands-on when needed while driving a strong DevOps culture. Strong background in CI/CD pipeline automation, performance monitoring, and cloud security best practices. Excellent troubleshooting skills, with the ability to dive deep into system-level issues. Excellent communication and leadership skills, with the ability to influence stakeholders at all levels. Good to Haves: Experience with CI/CD tools like Jenkins, ArgoCD, and Github Actions. Knowledge of monitoring solutions such as DataDog, NewRelic, CloudWatch, ELK Stack, Prometheus/Grafana. Hands-on experience with DevOps automation tools like Terraform and Ansible. AWS, GCP, or Azure certification(s). Previous experience in fast-paced startup environments. Passion for sports and a desire to impact millions of sports fans globally. Dream Sports is India’s leading sports technology company with brands such as Dream11 , the world’s largest fantasy sports platform, FanCode , a premier digital sports platform that personalizes content and commerce for all sports fans, DreamSetGo , a sports experiences platform, and DreamPay , a payment solutions provider. It has founded the Dream Sports Foundation to help and champion sportspeople and is an active member of the Federation of Indian Fantasy Sports , the nodal body for the Fantasy Sports industry in India. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports is always working on its mission to ‘Make Sports Better’ and is located in Mumbai. Dream Sports has been featured as a ‘Great Places to Work’ by the Great Place to Work Institute for four consecutive years. It is also the only sports tech company among India’s best companies to work for in 2021. For more information: https://dreamsports.group/ About FanCode: FanCode is India’s premier digital sports destination committed to giving all fans a highly personalized experience across Content, Community, and Commerce. Founded by sports industry veterans Yannick Colaco and Prasana Krishnan in March 2019, FanCode offers interactive live streaming, sports fan merchandise (FanCode Shop), fast interactive live match scores, in-depth live commentary, fantasy sports data and statistics (Fantasy Research Hub), expert fantasy tips, sports news and much more. FanCode has partnered with both domestic and international sports leagues and associations across multiple sports such as three of the top American Leagues - MLB, NFL, and NBA, FIVB, West Indies Cricket Board, Bangladesh Premier League, Caribbean Premier League, Bundesliga, and I-League. Dream Sports India’s leading Sports Technology company is the parent company of FanCode with brands such as Dream11 also in its portfolio. FanCode has already amassed over 2 crore+ app installs and won the “Best Sports Startup” award at FICCI India Sports Awards 2019. 
Get the FanCode App: iOS | Android Website: www.fancode.com FanCode Shop : www.shop.fancode.com Dream Sports is India’s leading sports technology company with 250 million users, housing brands such as Dream11 , the world’s largest fantasy sports platform, FanCode , India’s digital sports destination, and DreamSetGo , a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 ‘Sportans’. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports’ vision is to ‘Make Sports Better’ for fans through the confluence of sports and technology. For more information: https://dreamsports.group/

Posted 13 hours ago

Apply

7.0 - 11.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Naukri logo

1. The resource should have knowledge of Data Warehouse and Data Lake concepts
2. Should be aware of building data pipelines using PySpark
3. Should be strong in SQL skills
4. Should have exposure to the AWS environment and services like S3, EC2, EMR, Athena, Redshift, etc.
5. Good to have programming skills in Python
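As a rough illustration of the kind of PySpark pipeline point 2 refers to, here is a minimal sketch that reads raw CSV from S3, cleans it, and writes partitioned Parquet back for Athena or Redshift Spectrum to query. The bucket, path, and column names are hypothetical placeholders, and it assumes a Spark environment (such as EMR) with S3 access already configured.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Build a Spark session; on EMR this is preconfigured with S3 access.
    spark = SparkSession.builder.appName("sales-daily-load").getOrCreate()

    # Read raw CSV landed in the data lake (hypothetical bucket/path).
    raw = spark.read.csv("s3://example-lake/raw/sales/", header=True, inferSchema=True)

    # Basic cleanup: drop exact duplicates and derive a partition column
    # from a hypothetical timestamp column.
    clean = (
        raw.dropDuplicates()
           .withColumn("sale_date", F.to_date("sale_ts"))
    )

    # Write query-friendly Parquet, partitioned by date.
    clean.write.mode("overwrite").partitionBy("sale_date").parquet(
        "s3://example-lake/curated/sales/"
    )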

Posted 14 hours ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Title: Data Engineer
Location: Chennai, India
Experience: 5+ years
Work Mode: Full-time (9am-6:30pm), In-office (Monday to Friday)
Department: Asign Data Sciences

About Us: At Asign, we are revolutionizing the art sector with our innovative digital solutions. We are a passionate and dynamic startup dedicated to enhancing the art experience through technology. Join us in creating cutting-edge products that empower artists and art enthusiasts worldwide.

Role Overview: We are looking for an experienced Data Engineer with a strong grasp of ELT architecture and experience building and maintaining robust data pipelines. This is a hands-on role for someone passionate about structured data, automation, and scalable infrastructure. The ideal candidate will be responsible for sourcing, ingesting, transforming, and storing data, and for making it accessible and reliable for data analysis, machine learning, and reporting. You will play a key role in maintaining and evolving our data architecture and ensuring that our data flows efficiently and securely. A minimal example of the kind of orchestrated ELT flow this involves appears after the listing below.

Key Responsibilities
● Design, develop, and maintain efficient and scalable ELT data pipelines.
● Work closely with the data science and backend teams to understand data needs and transform raw inputs into structured datasets.
● Integrate multiple data sources including APIs, web pages, spreadsheets, and databases into a central warehouse.
● Monitor, test, and continuously improve data flows for reliability and performance.
● Create documentation and establish best practices for data governance, lineage, and quality.
● Collaborate with product and tech teams to plan data models that support business and AI/ML applications.

Required Skills
● Minimum 5 years of hands-on experience in data engineering.
● Solid understanding and experience with ELT pipelines and modern data stack tools.
● Practical knowledge of one or more orchestrators (Dagster, Airflow, Prefect, etc.).
● Proficiency in Python and SQL.
● Experience working with APIs and data integration from multiple sources.
● Familiarity with one or more cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
● Strong problem-solving and debugging skills.

Qualifications:

Must-have:
● Bachelor’s/Master’s degree in Computer Science, Engineering, Statistics, or a related field
● Proven experience (5+ years) in data engineering, data integration, and data management
● Hands-on experience with data sourcing tools and frameworks (e.g. Scrapy, Beautiful Soup, Selenium, Playwright)
● Proficiency in Python and SQL for data manipulation and pipeline development
● Experience with cloud-based data platforms (AWS, Azure, or GCP) and data warehouse tools (e.g. Redshift, BigQuery, Snowflake)
● Familiarity with workflow orchestration tools (e.g. Airflow, Prefect, Dagster)
● Strong understanding of relational and non-relational databases (PostgreSQL, MongoDB, etc.)
● Solid understanding of data modeling, ETL best practices, and data governance principles
● Systems knowledge and experience working with Docker
● Strong and creative problem-solving skills and the ability to think critically about data engineering solutions
● Effective communication and collaboration skills
● Ability to work independently and as part of a team in a fast-paced, dynamic environment

Good-to-have:
● Experience working with APIs and third-party data sources
● Familiarity with version control (Git) and CI/CD processes
● Exposure to basic machine learning concepts and working with data science teams
● Experience handling large datasets and working with distributed data systems

Why Join Us?
● Innovative Environment: Be part of a forward-thinking team that is dedicated to pushing the boundaries of art and technology.
● Career Growth: Opportunities for professional development and career advancement.
● Creative Freedom: Work in a role that values creativity and encourages new ideas.
● Company Culture: Enjoy a dynamic, inclusive, and supportive work environment.
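To illustrate what "practical knowledge of an orchestrator" looks like in code, here is a minimal sketch of an ELT flow in Prefect 2.x, one of the orchestrators the listing names. The task bodies and names are placeholder assumptions, not a real pipeline.

    from prefect import flow, task

    @task
    def extract():
        # Placeholder: pull rows from a source API (assumption).
        return [{"id": 1}, {"id": 2}]

    @task
    def load(rows):
        # Placeholder: write rows to the warehouse (assumption).
        print(f"loaded {len(rows)} rows")

    @flow
    def elt_pipeline():
        rows = extract()
        load(rows)

    if __name__ == "__main__":
        elt_pipeline()

Airflow and Dagster express the same idea with DAGs and assets respectively; the common thread is declaring tasks and their dependencies so the scheduler handles retries, logging, and runs.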

Posted 15 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential if you're ready for a challenge and growth.

Experience: 4-12 years
Location: Hyderabad
Immediate joiners only, work from office

Mandatory skills: SQL, Python, PySpark, Databricks (strong in core Databricks), AWS (AWS is mandatory)

JD:
Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.

Regards,
R Usha
usha@livecjobs.com
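For orientation, the workflow automation described in the JD above is often expressed as an Airflow DAG chaining an ingest step to a quality-check step. The sketch below assumes Airflow 2.4 or later (the `schedule` argument replaced `schedule_interval`); the DAG id and task bodies are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():
        # Placeholder: pull source data and land it in S3/Delta (assumption).
        print("ingesting source data")

    def validate():
        # Placeholder: run row-count and null-rate checks on landed data.
        print("running data quality checks")

    with DAG(
        dag_id="daily_ingest_and_validate",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        validate_task = PythonOperator(task_id="validate", python_callable=validate)

        # Quality checks only run once ingestion has succeeded.
        ingest_task >> validate_task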

Posted 17 hours ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About The Role: We are seeking a seasoned Engineering Manager to lead the development of our end-to-end Video Telematics and Cloud Analytics Platform. This role demands a strong technical leader with experience across embedded systems, AI/ML, computer vision, cloud infrastructure, and data analytics. You will be responsible for driving technical execution across multidisciplinary teams, overseeing everything from edge-based video analytics to cloud-hosted dashboards and insights.

Key Responsibilities:

Platform Leadership: Own and drive the development of the complete video telematics ecosystem, covering edge AI, video processing, cloud platform, and data insights. Architect and oversee scalable and secure cloud infrastructure for video streaming, data storage, OTA updates, analytics, and alerting systems. Define the data pipeline architecture for collecting, storing, analyzing, and visualizing video, sensor, and telematics data.

Cross-functional Engineering Management: Lead and coordinate cross-functional teams: the cloud/backend team for infrastructure, APIs, analytics dashboards, and system scalability; the AI/ML and CV teams for DMS/ADAS model deployment and real-time inference; the hardware/embedded teams for firmware integration, camera tuning, and SoC optimization. Collaborate with product and business stakeholders to define the roadmap, features, and timelines.

Data Analytics: Work closely with data scientists to build meaningful insights and reports from video and sensor data. Drive implementation of real-time analytics, fleet safety scoring, and predictive insights using telematics data.

Operational Responsibilities: Own product quality and system reliability across all components. Support product rollout, monitoring, and updates in production. Manage resources, mentor engineers, and build a high-performing development team. Ensure adherence to industry standards for security, privacy (GDPR), and compliance (e.g., GSR, AIS-140).

Requirements

Must-Have: Bachelor's or Master's in Computer Science, Electrical Engineering, or a related field. 7+ years of experience in software/product engineering, with 2+ years in a technical leadership or management role. Deep understanding of cloud architecture, video pipelines, edge computing, and microservices. Proficient in AWS/GCP/Azure, Docker/Kubernetes, serverless computing, and RESTful API design. Solid grasp of AI/ML integration and computer vision workflows (model lifecycle, optimization, deployment). Experience in data pipelines, SQL/NoSQL databases, and analytics tools (e.g., Redshift, Snowflake, Grafana, Superset).

Good-to-Have: Prior work on automotive, fleet, or video telematics solutions. Experience with camera hardware (MIPI, ISP tuning), compression codecs (H.264/H.265), and event-based recording. Familiarity with telematics protocols (CAN, MQTT) and geospatial analytics. Working knowledge of data privacy regulations (GDPR, CCPA). (ref:hirist.tech)

Posted 20 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Position Overview: We are seeking a talented and experienced Data Engineer with expertise in Apache Spark, Python/Java, and distributed systems. The ideal candidate will be skilled in creating and managing data pipelines using AWS.

Key Responsibilities:
Design, develop, and implement data pipelines for ingesting, transforming, and loading data at scale.
Utilize Apache Spark for data processing and analysis.
Utilize AWS services (S3, Redshift, EMR, Glue) to build and manage efficient data pipelines.
Optimize data pipelines for performance and scalability, considering factors like partitioning, bucketing, and caching (see the sketch below).
Write efficient and maintainable Python code.
Implement and manage distributed systems for data processing.
Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions.
Ensure data quality and integrity throughout the data lifecycle.

Requirements:
Proven experience with Apache Spark and Python/Java.
Strong knowledge of distributed systems.
Proficiency in creating data pipelines with AWS.
Excellent problem-solving and analytical skills.
Ability to work independently and as part of a team.
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience in designing and developing data pipelines using Apache Spark and Python.
Experience with distributed systems concepts (Hadoop, YARN) is a plus.
In-depth knowledge of AWS cloud services for data engineering (S3, Redshift, EMR, Glue).
Familiarity with data warehousing concepts (data modeling, ETL) is preferred.
Strong programming skills in Python (Pandas, NumPy, Scikit-learn are a plus).
Experience with data pipeline orchestration tools (Airflow, Luigi) is a plus.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.

Preferred Qualifications:
Experience with additional AWS services (e.g., AWS Glue, AWS Lambda, Amazon Redshift).
Familiarity with data warehousing and ETL processes.
Knowledge of data governance and best practices.
A good understanding of OOP concepts.
Hands-on experience with SQL database design.
Experience with Python, SQL, and data visualization/exploration tools. (ref:hirist.tech)
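To make the partitioning, bucketing, and caching point above concrete, here is a hedged PySpark sketch. The path, table, and column names are hypothetical, and the bucketed write assumes a Hive-compatible metastore is available.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

    # Hypothetical events dataset already landed in S3 as Parquet.
    events = spark.read.parquet("s3://example-bucket/events/")

    # Repartition by a join key so downstream joins shuffle less data.
    by_user = events.repartition(200, "user_id")

    # Cache a DataFrame that several subsequent actions will reuse.
    by_user.cache()
    print(by_user.count())

    # Bucketing persists a stable layout for repeated joins on the same key;
    # bucketBy requires saveAsTable and a metastore (assumption).
    by_user.write.bucketBy(64, "user_id").sortBy("user_id").mode(
        "overwrite"
    ).saveAsTable("events_bucketed")

The numbers (200 partitions, 64 buckets) are illustrative; real values depend on data volume and cluster size.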

Posted 21 hours ago

Apply

7.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Linkedin logo

Key Responsibilities Lead and mentor a high-performing data pod composed of data engineers, data analysts, and BI developers. Design, implement, and maintain ETL pipelines and data workflows to support real-time and batch processing. Architect and optimize data warehouses for scale, performance, and security. Perform advanced data analysis and modeling to extract insights and support business decisions. Lead data science initiatives including predictive modeling, NLP, and statistical analysis. Manage and tune relational and non-relational databases (SQL, NoSQL) for availability and performance. Develop Power BI dashboards and reports for stakeholders across departments. Ensure data quality, integrity, and compliance with data governance and security standards. Work with cross-functional teams (product, marketing, ops) to turn data into strategy. Qualifications Required : PhD in Data Science, Computer Science, Engineering, Mathematics, or related field. 7+ years of hands-on experience across data engineering, data science, analysis, and database administration. Strong experience with ETL tools (e.g., Airflow, Talend, SSIS) and data warehouses (e.g., Snowflake, Redshift, BigQuery). Proficient in SQL, Python, and Power BI. Familiarity with modern cloud data platforms (AWS/GCP/Azure). Strong understanding of data modeling, data governance, and MLOps practices. Exceptional ability to translate business needs into scalable data solutions. (ref:hirist.tech)

Posted 21 hours ago

Apply

6.0 - 8.0 years

37 - 40 Lacs

Kochi, Hyderabad, Coimbatore

Work from Office

Naukri logo

Required Skills & Qualifications: 6+ years of experience in Informatica ETL Development with at least 2+ years in Informatica IICS. Strong expertise in IICS CDI, CAI, CDI-Elastic, Taskflows, and REST/SOAP API Integration. Experience in cloud platforms (AWS, Azure, GCP) and working with databases like Snowflake, Redshift, or Synapse. Proficiency in SQL, PL/SQL, and performance tuning techniques. Knowledge of PowerCenter migration to IICS is a plus. Hands-on experience with Data Quality, Data Governance, and Master Data Management (MDM) is desirable. Experience in developing and deploying APIs, microservices, and event-driven architectures. Strong problem-solving skills and the ability to work in an Agile/Scrum environment. Preferred Qualifications: Informatica IICS Certification (CDI or CAI) is a plus. Exposure to Python, PySpark, or Big Data technologies is an advantage. Experience with CI/CD pipelines, DevOps practices, and Terraform for cloud deployments.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Linkedin logo

About the job: What makes Techjays an inspiring place to work? At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises. Be part of a company that’s pushing the boundaries of digital transformation. At Techjays, you’ll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI.

We’re looking for a skilled Data Analytics Engineer to help us build scalable, data-driven solutions that support real-time decision-making and deep business insights. You’ll play a key role in designing and delivering analytics systems that leverage Power BI, Snowflake, and SQL, helping teams across the organization make data-informed decisions with confidence.

Experience: 3 to 8 years
Job Location: Coimbatore

Primary Skills: Power BI / Tableau, SQL, Data Modeling, Data Warehousing, ETL/ELT Pipelines, AWS Glue, AWS Redshift, GCP BigQuery, Azure Data Factory, Cloud Data Pipelines, DAX, Data Visualization, Dashboard Development

Secondary Skills: Python, dbt, Apache Airflow, Git, CI/CD, DevOps for Data, Snowflake, Azure Synapse, Data Governance, Data Lineage, Apache Beam, Data Catalogs, Basic Machine Learning Concepts

Key Responsibilities:
Develop and maintain scalable, robust ETL/ELT data pipelines across structured and semi-structured data sources.
Collaborate with data scientists, analysts, and business stakeholders to identify data requirements and transform them into efficient data models.
Design and deliver interactive dashboards and reports using Power BI and Tableau.
Implement data quality checks, lineage tracking, and monitoring solutions to ensure high reliability of data pipelines.
Optimize SQL queries and BI reports for performance and scalability.
Work with cloud-native tools in AWS (e.g., Glue, Redshift, S3), GCP (e.g., BigQuery, Dataflow), or Azure (e.g., Data Factory, Synapse).
Automate data integration and visualization workflows.

Required Qualifications:
Bachelor's or Master’s degree in Computer Science, Information Systems, Data Science, or a related field.
3+ years of experience in data engineering or data analytics roles.
Proven experience with Power BI or Tableau, including dashboard design, DAX, calculated fields, and data blending.
Proficiency in SQL and experience in data modeling and relational database design.
Hands-on experience with data pipelines and orchestration using tools like Airflow, dbt, Apache Beam, or native cloud tools.
Experience working with one or more cloud platforms: AWS, GCP, or Azure.
Strong understanding of data warehousing concepts and tools such as Snowflake, BigQuery, Redshift, or Synapse.

Preferred Skills:
Experience with scripting in Python or Java for data processing.
Familiarity with Git, CI/CD, and DevOps for data pipelines.
Exposure to data governance, lineage, and catalog tools.
Basic understanding of ML pipelines or advanced analytics is a plus.

Soft Skills:
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Detail-oriented with a proactive approach to troubleshooting and optimization.

What we offer:
Best-in-class packages
Paid holidays and flexible paid time away
Casual dress code & flexible working environment
Medical insurance covering self & family up to 4 lakhs per person
An engaging, fast-paced environment with ample opportunities for professional development
A diverse and multicultural work environment
An innovation-driven culture that provides the support and resources needed to succeed

Posted 1 day ago

Apply

Exploring Redshift Jobs in India

The job market for redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of redshift, a typical career path may include roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium) (see the sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
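Several of the questions above (DISTKEY vs. SORTKEY, the COPY command) can be revised hands-on. Below is a minimal Python sketch using psycopg2 against a Redshift cluster; the cluster endpoint, credentials, IAM role, and bucket are all hypothetical placeholders.

    import psycopg2

    # Hypothetical cluster endpoint and credentials.
    conn = psycopg2.connect(
        host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="admin", password="example-password",
    )
    cur = conn.cursor()

    # DISTKEY controls how rows are distributed across compute nodes;
    # SORTKEY controls on-disk ordering, which speeds range-filtered scans.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sales (
            sale_id BIGINT,
            customer_id BIGINT,
            sale_date DATE
        )
        DISTKEY (customer_id)
        SORTKEY (sale_date);
    """)

    # COPY bulk-loads from S3 in parallel, the standard way to load Redshift.
    cur.execute("""
        COPY sales
        FROM 's3://example-bucket/sales/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
        FORMAT AS PARQUET;
    """)

    conn.commit()
    cur.close()
    conn.close()

As a rule of thumb, a frequent join key is a reasonable starting DISTKEY and a common filter column a reasonable SORTKEY, but the right choice always depends on the actual query patterns.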

Conclusion

As the demand for redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
