
1983 Redshift Jobs

JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

20 - 35 Lacs

Bengaluru

Hybrid

Source: Naukri

Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity?
Must-have skills required: Golang, Python, Java

Requirements:
We are looking for a Backend Engineer to help us through the next level of technology changes needed to revolutionize healthcare for India. We are seeking individuals who can understand real-world scenarios and come up with scalable tech solutions that make healthcare accessible to millions of patients. The role comes with a good set of challenges to solve and offers an opportunity to build new systems that will be rolled out at scale.
You have 4 to 7 years or more of software development experience with expertise in designing and implementing high-performance web applications.
Very strong understanding of, and experience with, any of Java, Scala, Golang, or Python.
Experience writing optimized queries in relational databases such as MySQL, Redshift, or Postgres.
Exposure to basic data engineering concepts such as data pipelines, Hadoop, or Spark.
You write clean and testable code, and you love to build platforms that other teams can build on top of.

Some of the challenges we solve include:
Clinical decision support. Early Detection: digitally assist doctors in identifying high-risk patients for early intervention. Track & Advise: analyze patients' vitals and test values across visits to assist doctors in personalizing chronic care. Risk Prevention: assist doctors in monitoring the progression of chronic disease by drawing attention to additional symptoms and side effects.
EMR (Electronic Medical Records): clinical software to write prescriptions and manage clinical records.
AI-powered features. Adapts to the doctor's practice: learns from doctors' prescribing preferences and provides relevant auto-fill recommendations for faster prescriptions. Longitudinal patient journey: AI analyzes the longitudinal journey of patients to assist doctors in early detection. Medical language processing: AI-driven automatic digitization of printed prescriptions and test reports.
Core platform: pharma advertising platform reaching doctors at the moment of truth; real-world evidence to generate market insights for B2B consumption.
Virtual Store: online pharmacy and diagnostic solutions helping patients with one-click ordering.

Technologies we use:
Distributed tech: Kafka, Elasticsearch. Databases: MongoDB, RDS. Cloud platform: AWS. Languages: Golang, Python, PHP. UI tech: React, React Native. Caching: Redis. Big data: AWS Athena, Redshift. APM: New Relic.

Responsibilities:
Develop testable and reusable services with structured, granular, and well-commented code.
Contribute to API building, data pipeline setup, and new tech initiatives needed for the core platform.
Adapt to new technologies and situations as company demands and requirements evolve, with the vision of providing the best customer experience.
Meet expected deliverables and quality standards with every release.
Collaborate with teams to design, develop, test, and refine deliverables that meet the objectives.
Perform code reviews and implement improvement plans.

Additional Responsibilities:
Pitch in during the design and architectural phases of solving business problems.
Organize, lead, and motivate the development team to meet expected timelines and quality standards across releases.
Actively contribute to development process improvement plans.
Assist peers through code reviews and juniors through mentoring.

Must-have Skills:
Sound understanding of Computer Science fundamentals, including data structures and space and time complexity.
Excellent problem-solving skills.
Solid understanding of any of the modern object-oriented programming languages (such as Java, Ruby, or Python) and/or functional languages (such as Scala or Golang).
Understanding of MPP (massively parallel processing) and frameworks like Spark.
Experience working with databases (RDBMS: MySQL, Redshift, etc.; NoSQL: Couchbase, MongoDB, Cassandra, etc.).
Experience working with open-source libraries and frameworks.
Strong command of version control tools (Git/Bitbucket).

Good-to-have Skills:
Knowledge of microservices architecture.
Experience working with Kafka.
Experience with or exposure to ORM frameworks (such as ActiveRecord or SQLAlchemy).
Working knowledge of full-text search (such as Elasticsearch or Solr).
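For illustration, here is a minimal sketch of the kind of optimized, parameterized relational query work this posting describes, assuming a Postgres/Redshift-compatible endpoint; the `visits` table, its columns, and the DSN are hypothetical and not taken from the posting.

```python
import psycopg2  # assumes a Postgres/Redshift-compatible endpoint

# Hypothetical table and column names, for illustration only.
QUERY = """
    SELECT patient_id, MAX(visit_date) AS last_visit
    FROM visits
    WHERE visit_date >= %s          -- sargable predicate: lets the planner use an index/sort key
      AND clinic_id = %s
    GROUP BY patient_id
    ORDER BY last_visit DESC
    LIMIT 100;
"""

def recent_patients(dsn: str, since: str, clinic_id: int):
    """Return up to 100 patients seen at a clinic since the given date."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(QUERY, (since, clinic_id))
        return cur.fetchall()
```

Parameterized placeholders (%s) keep the query plan-friendly and avoid SQL injection, which is the usual starting point for the "optimized queries" requirement.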

Posted 2 hours ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Bengaluru

Hybrid

Source: Naukri

About the Role
Love deep data? Love discussing solutions instead of problems? Then you could be our next Data Scientist. In a nutshell, your primary responsibility will be enhancing the productivity and utilization of the generated data. You will also work closely with business stakeholders, transform scattered pieces of information into valuable data, and share and present your insights with peers.

What You Will Do
Develop models and run experiments to infer insights from hard data.
Improve our product usability and identify new growth opportunities.
Understand reseller preferences to provide them with the most relevant products.
Design discount programs to help our resellers sell more.
Help resellers better recognize end-customer preferences to improve their revenue.
Use data to identify bottlenecks that will help our suppliers meet their SLA requirements.
Model seasonal demand to predict key organizational metrics.
Mentor junior data scientists in the team.

What You Will Need
Bachelor's/Master's degree in Computer Science (or a similar degree).
2-4 years of experience as a Data Scientist in a fast-paced organization, preferably B2C.
Familiarity with Neural Networks, Machine Learning, etc.
Familiarity with tools like SQL, R, Python, etc.
Strong understanding of Statistics and Linear Algebra.
Strong understanding of hypothesis/model testing and the ability to identify common model testing errors.
Experience designing and running A/B tests and drawing insights from them.
Proficiency in machine learning algorithms.
Excellent analytical skills to fetch data from reliable sources and generate accurate insights.
Experience in tech and product teams is a plus.

Bonus points for:
Experience working on personalization or other ML problems.
Familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift, etc.
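Since the role emphasizes designing A/B tests and hypothesis testing, here is a minimal sketch of a two-proportion test on experiment results; the conversion counts and sample sizes are invented for illustration, and the example assumes the statsmodels package.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results of an A/B test on a discount program:
# conversions and sample sizes for control (A) and variant (B).
conversions = [412, 487]
samples = [10_000, 10_050]

# Two-sided z-test for equality of conversion rates.
stat, p_value = proportions_ztest(count=conversions, nobs=samples)

rate_a, rate_b = (c / n for c, n in zip(conversions, samples))
print(f"control={rate_a:.3%}, variant={rate_b:.3%}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; keep collecting data or stop the test.")
```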

Posted 2 hours ago

Apply

3.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (Work from office, 5 days a week)

Role: As a Data Engineer II, you will design, build, and maintain scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.

Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines (ETL/ELT).
Ingest, clean, transform, and curate data for analytics and ML usage.
Work with orchestration tools like Airflow to schedule and manage workflows.
Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect).
Build data models and enable real-time and batch processing using Spark and AWS services.
Collaborate with DevOps and architects on system scalability and performance.
Optimize Redshift-based data solutions for performance and reliability.

Must-Have Skills & Experience:
3+ years in Data Engineering or Data Science with strong ETL and pipeline experience.
Expertise in Python and SQL.
Strong experience in data warehousing, data lakes, data modeling, and ingestion.
Working knowledge of Airflow or similar orchestration tools.
Hands-on experience with data extraction techniques such as CDC and batch loads, using Debezium, Kafka Connect, or AWS DMS.
Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
Knowledge of Spark or similar distributed systems.
Experience with queuing/messaging systems like SQS, Kinesis, or RabbitMQ.
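For context on the Airflow orchestration this role calls for, here is a minimal DAG sketch, assuming Airflow 2.4+; the DAG id, schedule, and the bodies of the extract/transform/load callables are placeholders, not an actual pipeline from the posting.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task bodies; real implementations would call CDC consumers,
# S3 staging, and Redshift COPY statements.
def extract(**_):   print("pull incremental changes (CDC) to S3")
def transform(**_): print("clean and model the staged data")
def load(**_):      print("COPY curated data into Redshift")

with DAG(
    dag_id="daily_redshift_etl",            # assumed name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```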

Posted 3 hours ago

Apply

5.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

TCS Hiring for HPC Engineer (High-Performance Computing Engineer) - PAN India
Experience: 5 to 15 Years Only
Job Location: PAN India

Required Technical Skill Set: AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes. AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation.

Desired Competencies (Technical/Behavioral Competency)
Primary Focus: Developing and running bioinformatics workflows/pipelines, leveraging and managing WDL engines such as miniWDL, Slurm, and R on the AWS cloud, utilizing technologies like AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes, etc.

Technical Skills:
AWS services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation, etc.
Python and bash scripting.
Experience with Slurm (configuring Slurm partitions, converting standalone jobs to Slurm, etc.).
Experience with R Workbench and Package Manager: installing and configuring R Workbench with a scalable backend such as Slurm or Kubernetes.
Experience with Docker.

Key Responsibilities: Developing and running bioinformatics workflows/pipelines, leveraging and managing WDL engines such as miniWDL, Slurm, and R on the AWS cloud, utilizing technologies like AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes, etc.

Good-to-Have:
1. Knowledge of Docker, ECS, RDS, S3.
2. Good knowledge of developing and running bioinformatics workflows/pipelines, leveraging and managing WDL engines such as miniWDL and Slurm.

Kind Regards,
Priyankha M
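As a rough illustration of "converting standalone jobs to Slurm" for WDL workflows, here is a minimal Python sketch that writes a batch script and submits it with sbatch; the workflow file, partition, resource sizes, and paths are all assumptions, not values from the posting.

```python
import subprocess
from pathlib import Path

# Hypothetical workflow and partition names, for illustration only.
SBATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=align-samples
#SBATCH --partition=compute
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=04:00:00

miniwdl run alignment.wdl sample_sheet=samples.tsv --dir /scratch/runs
"""

def submit() -> str:
    """Write the batch script and submit it to Slurm, returning sbatch's output."""
    script = Path("align_samples.sbatch")
    script.write_text(SBATCH_SCRIPT)
    result = subprocess.run(
        ["sbatch", str(script)], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()   # e.g. "Submitted batch job 12345"

if __name__ == "__main__":
    print(submit())
```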

Posted 6 hours ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

Source: Glassdoor

TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards, and monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., spaCy, NLTK, Hugging Face Transformers) and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required: Technical Architect with 7-12 years of experience
Salary: 22-25 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Location Type: In-person
Work Location: In person
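To make the model-deployment skill (item 11 above) concrete, here is a minimal FastAPI serving sketch; the `model.joblib` artifact, feature schema, and module name are hypothetical, and the example assumes a scikit-learn-style model saved with joblib.

```python
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving-sketch")
model = joblib.load("model.joblib")   # hypothetical pre-trained scikit-learn model

class Features(BaseModel):
    values: list[float]               # flat feature vector

@app.post("/predict")
def predict(payload: Features):
    """Return the model's prediction for a single feature vector."""
    x = np.array(payload.values).reshape(1, -1)
    return {"prediction": model.predict(x).tolist()}

# Run locally with:  uvicorn serve:app --reload   (assuming this file is serve.py)
```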

Posted 6 hours ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderābād

On-site

Source: Glassdoor

Zenoti provides an all-in-one, cloud-based software solution for the beauty and wellness industry. Our solution allows users to seamlessly manage every aspect of the business in a comprehensive mobile solution: online appointment bookings, POS, CRM, employee management, inventory management, built-in marketing programs, and more. Zenoti helps clients streamline their systems and reduce costs, while simultaneously improving customer retention and spending. Our platform is engineered for reliability and scale, and harnesses the power of enterprise-level technology for businesses of all sizes.

Zenoti powers more than 30,000 salons, spas, medspas and fitness studios in over 50 countries. This includes a vast portfolio of global brands, such as European Wax Center, Hand & Stone, Massage Heights, Rush Hair & Beauty, Sono Bello, Profile by Sanford, Hair Cuttery, CorePower Yoga and TONI&GUY. Our recent accomplishments include surpassing a $1 billion unicorn valuation, being named Next Tech Titan by GeekWire, raising an $80 million investment from TPG, and ranking as the 316th fastest-growing company in North America on Deloitte's 2020 Technology Fast 500™. We are also proud to be recognized as Great Place to Work Certified™ for 2021-2022, which reaffirms our commitment to empowering people to feel good and find their greatness. To learn more about Zenoti visit: https://www.zenoti.com

Our products are built on Windows .NET and SQL Server and managed in AWS. Our web UX stack is built on jQuery, and some areas use AngularJS. Our middle tier is in C#, and we build our infrastructure on an extensive set of RESTful APIs. We build native iOS and Android apps, and are starting to experiment with Flutter and Dart. For select infrastructure components we use Python extensively, and we use Tableau for analytics dashboards. We use Redshift, Aurora, Redis ElastiCache, Lambda, and other AWS products to build and manage our complete service, moving towards serverless components. We deal with billions of API calls, millions of records in databases, and terabytes of data, with all the services we build having to run 24x7 at 99.99% availability.

What will I be doing?
Design, develop, test, release, and maintain components of Zenoti.
Collaborate with a team of PM, Dev, and QA to release features.
Work in a team following agile development practices (Scrum).
Build usable software that is released at high quality, runs at scale, and is adopted by customers.
Learn to scale your features to handle 2x to 4x growth every year and manage code that has to deal with millions of records and terabytes of data.
Release new features into production every month, and get real feedback from thousands of customers to refine your designs.
Be proud of what you work on, and obsess about the quality of the work you produce.

What skills do I need?
4 to 7 years of experience in designing and developing applications on the Microsoft stack.
Strong background in building web applications.
Strong experience in HTML, JavaScript, CSS, jQuery, and .NET/IIS with C#.
Proficient in working with Microsoft SQL Server.
Experience in developing web applications using Angular/Flutter/Dart is a plus.
Strong logical, analytical, and problem-solving skills.
Excellent communication skills.
Can work in a fast-paced, ever-changing startup environment.

Why Zenoti?
Be part of an innovative company that is revolutionizing the wellness and beauty industry.
Work with a dynamic and diverse team that values collaboration, creativity, and growth.
Opportunity to lead impactful projects and help shape the global success of Zenoti's platform.
Attractive compensation.
Medical coverage for yourself and your immediate family.
Access to regular yoga, meditation, breathwork, and stress management sessions. We also include your family in benefit awareness initiatives.
Regular social activities, and opportunities to give back through social work and community initiatives.

Zenoti provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

Posted 6 hours ago

Apply

3.0 years

6 - 8 Lacs

Hyderābād

On-site

Source: Glassdoor

- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience writing complex SQL queries
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling
- Experience with statistical analysis packages such as R, SAS, and MATLAB

Do you enjoy diving deep into data, building data models, and developing business metrics to generate actionable insights? Are you looking for an opportunity to define an end-to-end analytics roadmap, work with cross-functional teams, and leverage cutting-edge technologies and cloud solutions to develop analytics products? The DSP Analytics team has an exciting opportunity for a Business Intelligence Engineer (BIE) to improve Amazon's Delivery Service Partner (DSP) program through impactful data solutions.

The goal of Amazon's DSP organization is to exceed the expectations of our customers by ensuring that their orders, no matter how large or small, are delivered as quickly, accurately, and cost-effectively as possible. To meet this goal, Amazon is continually striving to innovate and provide a best-in-class delivery experience through the introduction of pioneering new products and services in the last-mile delivery space.

We are looking for an innovative, highly motivated, and experienced BIE who can think holistically about problems to understand how systems work together, and who can identify and execute both tactical and strategic projects. You will work closely with engineering teams, product managers, program managers, and org leaders to deliver end-to-end data solutions aimed at continuously enhancing overall DSP performance and delivery quality. The business coverage is broad, and you will identify and prioritize what matters most for the business, quantify what is (or is not) working, invent and simplify the current process, and develop self-serve data and reporting solutions. You should have excellent business and communication skills, enabling you to work with business owners to define the roadmap, develop milestones, define key business questions, and build datasets that answer those questions. The ideal candidate has hands-on SQL and scripting-language experience, and excels in designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications.

Key job responsibilities
- Lead the design, implementation, and delivery of BI solutions for Sub-Same Day (SSD) DSP performance.
- Manage and execute entire projects from start to finish, including stakeholder management, data gathering and manipulation, modeling, problem solving, and communication of insights and recommendations.
- Extract, transform, and load data from many data sources using SQL, scripting, and other ETL tools.
- Design, build, and maintain automated reporting, dashboards, and ongoing analysis to enable data-driven decisions across our team and with partner teams.
- Report key insight trends using statistical rigor to simplify and inform the larger team of noteworthy trends that impact the business.
- Retrieve and analyze data using a broad set of Amazon's data technologies (e.g., Redshift, AWS S3, Amazon internal platforms/solutions) and resources, knowing how, when, and which to use.
- Earn the trust of your customers and stakeholders by constantly obsessing over their business use cases and data needs, and helping them solve their problems by leveraging technology.
- Work closely with business stakeholders and the senior leadership team to review the roadmap, contribute to business strategy, and show how they can leverage analytics for success.

About the team
We are the core Amazon DSP BI team, with the vision to enable data-, insights-, and science-driven decision-making. We have exceptionally talented and fun-loving team members. In our team, you will have the opportunity to dive deep into complex business and data problems, drive large-scale technical solutions, and raise the bar for operational excellence. We love to share ideas and learning with each other. We are a relatively new team and do not carry legacy operational burden. We believe in promoting and using ideas to disrupt the status quo.

Per the internal transfers guidelines, please reach out to the hiring manager for an informational through the "Request Informational" button on the job page.

- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
- Experience developing and presenting recommendations of new metrics allowing better understanding of the performance of the business

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
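As a small illustration of the "SQL to pull data, Python to process it for modeling" qualification above, here is a minimal sketch assuming a Redshift endpoint reachable through SQLAlchemy/psycopg2; the connection string, the `dsp_deliveries` table, and its columns are invented for the example.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string and table/column names, for illustration only.
engine = create_engine("postgresql+psycopg2://user:pass@redshift-host:5439/analytics")

deliveries = pd.read_sql(
    """
    SELECT delivery_date, station_id, packages_delivered, packages_attempted
    FROM dsp_deliveries
    WHERE delivery_date >= CURRENT_DATE - 28
    """,
    engine,
)

# Light processing for modeling: a daily delivery-completion-rate feature per station.
deliveries["completion_rate"] = (
    deliveries["packages_delivered"] / deliveries["packages_attempted"]
)
daily = (
    deliveries.groupby(["station_id", "delivery_date"])["completion_rate"]
    .mean()
    .reset_index()
)
print(daily.head())
```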

Posted 6 hours ago

Apply

5.0 - 15.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Source: LinkedIn

TCS Hiring for HPC Engineer (High-Performance Computing Engineer) - PAN India
Experience: 5 to 15 Years Only
Job Location: PAN India

Required Technical Skill Set: AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes. AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation.

Desired Competencies (Technical/Behavioral Competency)
Primary Focus: Developing and running bioinformatics workflows/pipelines, leveraging and managing WDL engines such as miniWDL, Slurm, and R on the AWS cloud, utilizing technologies like AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes, etc.

Technical Skills:
AWS services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation, etc.
Python and bash scripting.
Experience with Slurm (configuring Slurm partitions, converting standalone jobs to Slurm, etc.).
Experience with R Workbench and Package Manager: installing and configuring R Workbench with a scalable backend such as Slurm or Kubernetes.
Experience with Docker.

Key Responsibilities: Developing and running bioinformatics workflows/pipelines, leveraging and managing WDL engines such as miniWDL, Slurm, and R on the AWS cloud, utilizing technologies like AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes, etc.

Good-to-Have:
1. Knowledge of Docker, ECS, RDS, S3.
2. Good knowledge of developing and running bioinformatics workflows/pipelines, leveraging and managing WDL engines such as miniWDL and Slurm.

Kind Regards,
Priyankha M

Posted 6 hours ago

Apply

0 years

3 - 4 Lacs

Mumbai

On-site

Source: Glassdoor

Company Description
Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We believe in the power of entrepreneurial capitalism and use it on various platforms to ignite the conversations that drive systemic change in business, culture, and society. We celebrate success and are committed to using our megaphone to drive diversity, equity and inclusion. We are the world's biggest business media brand and we consistently place in the top 20 of the most popular sites in the United States, in good company with brands like Netflix, Apple and Google. In short, we have a big platform and we use it responsibly.

Job Description
The Data Research Engineering Team is a brand-new team whose purpose is to manage data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Research Engineer - Team Lead will involve guiding team members through code standards, optimization techniques, and best practices in debugging and testing. They oversee the development and consistent application of testing protocols, including unit, integration, and performance testing, ensuring a high standard of code quality across the team. They work closely with engineers, offering technical mentorship in areas like Git version control, task tracking, and documentation processes, as well as advanced Python and database practices.

Responsibilities
Technical Mentorship and Code Quality: Guide and mentor team members on coding standards, optimization techniques, and debugging. Conduct thorough code reviews, provide constructive feedback, and enforce code quality standards to ensure maintainable and efficient code.
Testing and Quality Assurance Leadership: Develop, implement, and oversee rigorous testing protocols, including unit, integration, and performance testing, to guarantee the reliability and robustness of all projects. Advocate for automated testing and ensure comprehensive test coverage within the team.
Process Improvement and Documentation: Establish and maintain high standards for version control, documentation, and task tracking across the team. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Serve as the team's primary resource for troubleshooting complex issues, particularly in Python, MySQL, GitKraken, and KNIME. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including architecture design, data engineering best practices, and advanced Python programming. Guide the team in building scalable and reliable data solutions.
Cross-Functional Collaboration: Work closely with data scientists, product managers, and quality assurance teams to align on data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, cleaning, normalization, transformation, aggregation, and visualization to support production-level workloads.
Big Data: Leverage distributed processing frameworks such as PySpark and Kafka to manage and process massive datasets efficiently.
Cloud-Native Data Solutions: Develop and optimize workflows for cloud-native data solutions, including BigQuery, Databricks, Snowflake, Redshift, and tools like Airflow and AWS Glue.
Regulations: Ensure compliance with regulatory frameworks like GDPR and implement robust data governance and security measures.

Skills and Experience
Experience: 8+ years
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to design modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Git and GitKraken for version control, with experience in task management, ideally on GitHub. Familiarity with KNIME or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing. Ability to embed automated testing within development workflows.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Effective communicator capable of aligning cross-functional teams on project requirements, technical standards, and data workflows.
Adaptability and Continuous Learning: A commitment to staying updated with the latest in data engineering, coding practices, and tools, with a proactive approach to learning and sharing knowledge within the team.
Data Pipelines: Comprehensive expertise in building and optimizing data pipelines, including data transfer, transformation, and visualization, for real-world applications.
Distributed Systems: Strong knowledge of distributed systems and big data tools such as PySpark and Kafka.
Data Warehousing: Proficiency with modern cloud data warehousing platforms (BigQuery, Databricks, Snowflake, Redshift) and orchestration tools (Airflow, AWS Glue).
Regulations: Demonstrated understanding of regulatory compliance requirements (e.g., GDPR) and best practices for data governance and security in enterprise settings.

Perks
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

Qualifications
Educational Background: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.

Additional Information
All your information will be kept confidential according to EEO guidelines.
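Given the emphasis above on unit-testing protocols for Python data code, here is a minimal pytest-style sketch; the `normalize_prices` helper and its data are hypothetical, invented purely to show the testing pattern.

```python
import pytest

# Hypothetical pipeline helper under test, for illustration only.
def normalize_prices(rows: list[dict]) -> list[dict]:
    """Strip currency symbols and thousands separators, then cast prices to floats."""
    cleaned = []
    for row in rows:
        price = str(row["price"]).replace("$", "").replace(",", "").strip()
        cleaned.append({**row, "price": float(price)})
    return cleaned

def test_normalize_prices_handles_symbols_and_commas():
    rows = [{"sku": "A1", "price": "$1,299.00"}, {"sku": "B2", "price": "49.5"}]
    out = normalize_prices(rows)
    assert out[0]["price"] == pytest.approx(1299.0)
    assert out[1]["price"] == pytest.approx(49.5)

def test_normalize_prices_raises_on_garbage():
    with pytest.raises(ValueError):
        normalize_prices([{"sku": "C3", "price": "not-a-number"}])
```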

Posted 6 hours ago

Apply

2.0 - 3.0 years

7 Lacs

Mumbai

On-site

Source: Glassdoor

Job Title: Tableau Developer
Experience: 2-3 Years
Location: Mumbai, India

About the Role: We are seeking a highly motivated and skilled Tableau Developer with 2-3 years of proven experience to join our dynamic team in Mumbai. In this role, you will be instrumental in transforming complex data into insightful and interactive dashboards and reports using Tableau. You will work closely with business stakeholders, data analysts, and other technical teams to understand reporting requirements, develop effective data visualizations, and contribute to data-driven decision-making within the organization.

Roles and Responsibilities:
Dashboard Development: Design, develop, and maintain compelling and interactive Tableau dashboards and reports that meet business requirements and enhance user experience. Create various types of visualizations, including charts, graphs, maps, and tables, to effectively communicate data insights. Implement advanced Tableau features such as calculated fields, parameters, sets, groups, and Level of Detail (LOD) expressions to create sophisticated analytics. Optimize Tableau dashboards for performance and scalability, ensuring quick loading times and efficient data retrieval.
Data Sourcing and Preparation: Connect to various data sources (e.g., SQL Server, Oracle, Excel, cloud-based data platforms like AWS Redshift, Google BigQuery, etc.) and extract, transform, and load (ETL) data for reporting purposes. Perform data analysis, validation, and cleansing to ensure the accuracy, completeness, and consistency of data used in reports. Collaborate with data engineers and data analysts to understand data structures, identify data gaps, and ensure data quality.
Requirements Gathering & Collaboration: Work closely with business users, stakeholders, and cross-functional teams to gather and understand reporting and analytical requirements. Translate business needs into technical specifications and develop effective visualization solutions. Participate in discussions and workshops to refine requirements and propose innovative reporting approaches.
Troubleshooting and Support: Diagnose and resolve issues related to data accuracy, dashboard performance, and report functionality. Provide ongoing support and maintenance for existing Tableau dashboards and reports. Assist end users with Tableau-related queries and provide training as needed.
Documentation and Best Practices: Create and maintain comprehensive documentation for Tableau dashboards, data sources, and development processes. Adhere to data visualization best practices and design principles to ensure consistency and usability across all reports. Contribute to code reviews and knowledge sharing within the team.
Continuous Improvement: Stay up to date with the latest Tableau features, updates, and industry trends in data visualization and business intelligence. Proactively identify opportunities for improvement in existing reports and propose enhancements. Participate in an Agile development environment, adapting to changing priorities and contributing to sprint goals.

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
2 years of hands-on experience as a Tableau Developer, with a strong portfolio of developed dashboards and reports.
Proficiency in Tableau Desktop and Tableau Server (including publishing, managing permissions, and performance monitoring).
Strong SQL skills for data extraction, manipulation, and querying from various databases.
Solid understanding of data warehousing concepts, relational databases, and ETL processes.
Familiarity with data visualization best practices and design principles.
Excellent analytical and problem-solving skills with a keen eye for detail.
Strong communication skills (verbal and written) with the ability to explain complex data insights to non-technical stakeholders.
Ability to work independently and collaboratively in a team-oriented environment.
Adaptability to changing business requirements and a fast-paced environment.

Additional Qualifications:
Experience with other BI tools (e.g., Power BI, Qlik Sense) is a plus.
Familiarity with scripting languages like Python or R for advanced data manipulation and analytics.
Knowledge of cloud data platforms (e.g., AWS, Azure, GCP).
Experience with Tableau Prep for data preparation.

Job Types: Full-time, Permanent
Pay: Up to ₹750,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
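For the data validation and cleansing step that precedes publishing a Tableau data source, here is a minimal pandas sketch; the `sales_extract.csv` file and its columns are hypothetical and only illustrate the pattern.

```python
import pandas as pd

# Hypothetical extract feeding a Tableau data source, for illustration only.
df = pd.read_csv("sales_extract.csv", parse_dates=["order_date"])

# Quick validation summary before the data reaches any dashboard.
issues = {
    "missing_order_id": int(df["order_id"].isna().sum()),
    "negative_amounts": int((df["amount"] < 0).sum()),
    "duplicate_rows": int(df.duplicated(subset=["order_id"]).sum()),
}
print("Validation summary:", issues)

# Basic cleansing before publishing the extract.
clean = (
    df.dropna(subset=["order_id"])
      .drop_duplicates(subset=["order_id"])
      .assign(amount=lambda d: d["amount"].clip(lower=0))
)
clean.to_csv("sales_extract_clean.csv", index=False)
```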

Posted 6 hours ago

Apply

3.0 years

3 - 5 Lacs

Chennai

On-site

Source: Glassdoor

Integration Development: Design and implement integration solutions using MuleSoft Anypoint Platform for various enterprise applications, including ERP, CRM, and third-party systems.
API Management: Develop and manage APIs using MuleSoft's API Gateway, ensuring best practices for API design, security, and monitoring.
MuleSoft Anypoint Studio: Develop, deploy, and monitor MuleSoft applications using Anypoint Studio and Anypoint Management Console.
Data Transformation: Use MuleSoft's DataWeave to transform data between various formats (XML, JSON, CSV, etc.) as part of integration solutions.
Troubleshooting and Debugging: Provide support in troubleshooting and resolving integration issues and ensure the solutions are robust and scalable.
Collaboration: Work closely with other developers, business analysts, and stakeholders to gather requirements and design and implement integration solutions.
Documentation: Create and maintain technical documentation for the integration solutions, including API specifications, integration architecture, and deployment processes.
Best Practices: Ensure that the integrations follow industry best practices and MuleSoft's guidelines for designing and implementing scalable and secure solutions.

Required Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
3+ years of experience in MuleSoft development and integration projects.
Proficiency in MuleSoft Anypoint Platform, including Anypoint Studio, Anypoint Exchange, and Anypoint Management Console.
Strong knowledge of API design and management, including REST, SOAP, and web services.
Proficiency in DataWeave for data transformation.
Hands-on experience with integration patterns and technologies such as JMS, HTTP/HTTPS, File, Database, and cloud integrations.
Experience with CI/CD pipelines and deployment tools such as Jenkins, Git, and Maven.
Good understanding of cloud platforms (AWS, Azure, or GCP) and how MuleSoft integrates with cloud services.
Excellent troubleshooting and problem-solving skills.
Strong communication skills and the ability to work effectively in a team environment.

Strong working knowledge of modern programming languages, ETL/data integration tools (preferably SnapLogic), and understanding of cloud concepts. SSL/TLS, SQL, REST, JDBC, JavaScript, JSON.
Strong hands-on experience in SnapLogic design/development, with good working experience using various snaps for JDBC, SAP, Files, REST, SOAP, etc.
Good to have the ability to build complex mappings with JSON path expressions, flat files, and Python scripting.
Good to have experience in Groundplex and Cloudplex integrations.
Should be able to deliver the project by leading a team of 6-8 members.
Should have experience in integration projects with heterogeneous landscapes.
Experience in one or more RDBMS (Oracle, DB2, SQL Server, PostgreSQL, and Redshift).
Real-time experience working with OLAP and OLTP database models (dimensional models).

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody.
When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 6 hours ago

Apply

130.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description: Consumption Adoption Analyst

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview
As the Consumption Adoption Analyst, you will focus on driving innovation through data analytics and insights, engaging in requirement gathering, and building business intelligence front ends while creating and managing semantic models. You will also conduct training sessions for end users, developers, and administrators to ensure successful project implementation and user adoption. Your ability to collaborate within a matrix structure will be key as you report to both functional and product managers.

What Will You Do In This Role
Requirement gathering, discovery, estimation, and building business intelligence front ends as well as database-related back-end work.
Deliver various types of training for end users, developers, and admins, and conduct support activities for the projects delivered.
Contribute to adoption of existing Technology Engineering community practices within the products where you work.
Work within a matrix organizational structure, reporting to both the functional manager and the Product Manager.

What Should You Have
Bachelor's degree in Information Technology, Computer Science, or any technology stream.
4+ years of hands-on experience working with the following or similar technologies: Power BI, ThoughtSpot, data modeling, SQL, and cloud data warehouses/lakehouses such as Databricks and AWS Redshift.
Communication: effective communication at different levels, including supporting development work, hypercare, trainings, user onboarding, and requirements gathering with senior people.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else.
For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today. #HYIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Client Counseling, Emerging Technologies, Operational Acceptance Testing (OAT), Product Management, Requirements Management, Solution Architecture, Stakeholder Relationship Management, Technical Advice, User Experience (UX) Design, Waterfall Model
Preferred Skills:

Job Posting End Date: 08/13/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R350691

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

Job Details
About Salesforce
We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good, you've come to the right place.

About The Team
Join the Marketing AI/ML Algorithms and Applications team within Salesforce's Marketing organization. In this role, you'll have the opportunity to make an outsized impact on Salesforce's marketing initiatives, helping to promote our vast product portfolio to a global customer base, including 90% of the Fortune 500. By driving state-of-the-art data engineering solutions for our internal marketing platforms, you'll directly contribute to enhancing the effectiveness of Salesforce's marketing efforts. Your data engineering expertise will play a pivotal role in accelerating Salesforce's growth. This is a unique chance to apply your passion for data engineering to drive transformative business impact on a global scale, shaping the future of how Salesforce engages with potential and existing customers, and contributing to our continued innovation and industry leadership in the CRM space.

We are seeking an experienced Senior Data Engineer to develop scalable data platforms and pipelines that enable marketing effectiveness measurement and customer targeting for AI/ML-driven marketing at Salesforce. In this role, you will work closely with our Data Science team to support the full lifecycle of model development, from data exploration to production deployment for our growing portfolio of production models.
Responsibilities
Design and implement scalable data platforms, pipelines, and features to support advanced statistical and machine learning techniques across batch and real-time processing scenarios.
Contribute to the development and maintenance of Feature Stores to support ML algorithm development and deployment, ensuring efficient access to high-quality features for both marketing effectiveness and customer targeting models.
Build real-time and batch data processing systems, including ETL workflows, to integrate customer data from multiple sources (Snowflake, Data Cloud, etc.), ensuring data quality and accessibility for modeling purposes.
Collaborate with Analytics, Data Science, and Business teams to understand and meet their data needs for marketing effectiveness models (e.g., Marketing Driven Pipe, Marketing Matured ACV) and customer targeting models (e.g., Product Interest, Lead Scoring, Event Intelligence, Account Propensity).
Implement data infrastructure to support model deployment, monitoring, and in-production tuning, with output consumption through analytics platforms like Tableau.
Optimize data infrastructure for performance, scalability, reliability, and cost-efficiency.
Work with data stewards and custodians to ensure data quality, security, and governance, adhering to Salesforce's commitment to responsible and ethical use of data.
Share technical expertise and support junior team members.

Position Requirements
5+ years of experience building enterprise-scale data platforms and pipelines for large-scale data sets in a cloud environment.
Strong knowledge of big data technologies such as SQL, Python, PySpark/Spark, Hive, Presto, and Kafka.
Strong knowledge of data modeling techniques and tools, with an emphasis on database design and ETL processes.
Strong experience with cloud platforms such as AWS, GCP, or Azure for data sourcing, supporting model development, and assisting with operationalization.
Experience with data engineering technologies such as dbt, Airflow, data warehouses (e.g., Snowflake, Redshift), and data lakes (e.g., S3, HDFS).
Solid understanding of advanced statistical and machine learning techniques and their data requirements.
Experience with customer data platforms (CDPs) and customer 360 initiatives.
Strong problem-solving and communication skills.
Passion for enabling data-driven marketing effectiveness measurement and customer targeting in B2B contexts.
Familiarity with Salesforce products and B2B customer data is advantageous.

This role offers an exciting opportunity to leverage your data engineering expertise to power data- and AI-driven innovation for the world's #1 CRM provider, working closely with our Data Science team to unlock transformative value for our vast global customer base through AI/ML-driven marketing initiatives.

Accommodations
If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal.
Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
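To illustrate the kind of batch feature pipeline described in the responsibilities of the Salesforce posting above, here is a minimal PySpark sketch; the S3 paths, table layout, column names, and feature definitions are hypothetical and invented only to show the pattern.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("account-engagement-features").getOrCreate()

# Hypothetical input/output locations and column names, for illustration only.
events = spark.read.parquet("s3://marketing-lake/raw/engagement_events/")

# Aggregate raw engagement events into per-account features for a targeting model.
features = (
    events
    .filter(F.col("event_date") >= F.date_sub(F.current_date(), 90))
    .groupBy("account_id")
    .agg(
        F.countDistinct("campaign_id").alias("campaigns_touched_90d"),
        F.sum(F.when(F.col("event_type") == "email_click", 1).otherwise(0)).alias("email_clicks_90d"),
        F.max("event_date").alias("last_engagement_date"),
    )
)

features.write.mode("overwrite").parquet("s3://marketing-lake/features/account_engagement/")
```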

Posted 6 hours ago

Apply

0 years

0 Lacs

Chennai

On-site

Source: Glassdoor

Required skills:
Strong working knowledge of modern programming languages, ETL/data integration tools (preferably SnapLogic), and understanding of cloud concepts. SSL/TLS, SQL, REST, JDBC, JavaScript, JSON.
Strong hands-on experience in SnapLogic design/development, with good working experience using various snaps for JDBC, SAP, Files, REST, SOAP, etc.
Good to have the ability to build complex mappings with JSON path expressions, flat files, and Python scripting.
Good to have experience in Groundplex and Cloudplex integrations.
Should be able to deliver the project by leading a team of 6-8 members.
Should have experience in integration projects with heterogeneous landscapes.
Experience in one or more RDBMS (Oracle, DB2, SQL Server, PostgreSQL, and Redshift).
Real-time experience working with OLAP and OLTP database models (dimensional models).

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
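Since the posting above asks for building mappings with JSON path expressions and Python scripting, here is a minimal sketch using the jsonpath-ng package; the order payload, field names, and target shape are hypothetical, standing in for a document an integration pipeline might receive.

```python
from jsonpath_ng import parse  # assumes the jsonpath-ng package is installed

# Hypothetical source payload, e.g. an order document pulled by a REST endpoint.
payload = {
    "order": {
        "id": "SO-1001",
        "lines": [
            {"sku": "A1", "qty": 2, "price": 10.0},
            {"sku": "B2", "qty": 1, "price": 99.5},
        ],
    }
}

# JSONPath expressions select the fields to map into the target format.
skus = [m.value for m in parse("$.order.lines[*].sku").find(payload)]
totals = [m.value["qty"] * m.value["price"] for m in parse("$.order.lines[*]").find(payload)]

target = {
    "order_id": payload["order"]["id"],
    "line_items": [{"sku": s, "line_total": t} for s, t in zip(skus, totals)],
}
print(target)
```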

Posted 6 hours ago

Apply

5.0 years

15 - 24 Lacs

Bengaluru

On-site

GlassDoor logo

Job Title: Senior Data Engineer – Azure | ADF | Databricks | PySpark | AWS Location: Bangalore, Hyderabad, Chennai (Hybrid Mode) Experience Required: 5+ Years Notice Period: Immediate Job Description We are looking for a Senior Data Engineer who is passionate about designing and developing scalable data pipelines, optimizing data architecture, and working with advanced big data tools and cloud platforms. This is a great opportunity to be a key player in transforming data into meaningful insights by leveraging modern data engineering practices on Azure, AWS, and Databricks. You will be working with cross-functional teams including data scientists, analysts, and software engineers to deliver robust data solutions. The ideal candidate will be technically strong in Azure Data Factory, PySpark, Databricks, and AWS services, and will have experience in building end-to-end ETL workflows and driving business impact through data. Key Responsibilities Design, build, and maintain scalable and reliable data pipelines and ETL workflows Implement data ingestion and transformation using Azure Data Factory (ADF) and Azure Databricks (PySpark) Work across multiple data platforms including Azure, AWS, Snowflake, and Redshift Collaborate with data scientists and business teams to understand data needs and deliver solutions Optimize data storage, processing, and retrieval for performance and cost-effectiveness Develop data quality checks and monitoring frameworks for pipeline health Ensure data governance, security, and compliance with industry standards Lead code reviews, set data engineering standards, and mentor junior team members Propose and evaluate new tools and technologies for continuous improvement Must-Have Skills Strong programming skills in Python, SQL, or Scala Azure Data Factory, Azure Databricks, Synapse Analytics Hands-on with PySpark, Spark, Hadoop, Hive Experience with cloud platforms (Azure preferred; AWS/GCP acceptable) Data Warehousing: Snowflake, Redshift, BigQuery Strong ETL/ELT pipeline development experience Workflow orchestration tools such as Airflow, Prefect, or Luigi Excellent problem-solving, debugging, and communication skills Nice to Have Experience with real-time streaming tools (Kafka, Flink, Spark Streaming) Exposure to data governance tools and regulations (GDPR, HIPAA) Familiarity with ML model integration into data pipelines Containerization and CI/CD exposure: Docker, Git, Kubernetes (basic) Experience with vector databases and unstructured data handling Technical Environment Programming: Python, Scala, SQL Big Data Tools: Spark, Hadoop, Hive Cloud Platforms: Azure (ADF, Databricks, Synapse), AWS (S3, Glue, Lambda), GCP Data Warehousing: Snowflake, Redshift, BigQuery Databases: PostgreSQL, MySQL, MongoDB, Cassandra Orchestration: Apache Airflow, Prefect, Luigi Tools: Git, Docker, Azure DevOps, CI/CD pipelines Soft Skills Strong analytical thinking and problem-solving abilities Excellent verbal and written communication Collaborative team player with leadership qualities Self-motivated, organized, and able to manage multiple projects Education & Certifications Bachelor’s or Master’s Degree in Computer Science, IT, Engineering, or equivalent Cloud certifications (e.g., Microsoft Azure Data Engineer, AWS Big Data) are a plus Key Result Areas (KRAs) Timely delivery of high-performance data pipelines Quality of data integration and governance compliance Business team satisfaction and data readiness Proactive optimization of data processing workloads Key
Performance Indicators (KPIs) Pipeline uptime and performance metrics Reduction in overall data latency Zero critical issues in production post-release Stakeholder satisfaction score Number of successful integrations and migrations Job Types: Full-time, Permanent Pay: ₹1,559,694.89 - ₹2,441,151.11 per year Benefits: Provident Fund Schedule: Day shift Supplemental Pay: Performance bonus Application Question(s): What is your notice period in days? Experience: Azure Data Factory, Azure Databricks, Synapse Analytics: 5 years (Required) Python, SQL, or Scala: 5 years (Required) Work Location: In person
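As a concrete illustration of the data-quality checks mentioned in the posting above, a minimal PySpark sketch is shown below. The storage path and column names are hypothetical, and the same pattern applies whether the job runs on Azure Databricks or elsewhere.

```python
# Illustrative PySpark data-quality check: validate a transformed dataset
# before publishing it downstream. Path and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline_quality_check").getOrCreate()

df = spark.read.parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"  # placeholder
)

total_rows = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()

# Fail fast so the orchestrator (e.g. ADF or Airflow) marks the run unhealthy.
if total_rows == 0:
    raise ValueError("Quality check failed: curated orders dataset is empty")
if null_keys > 0:
    raise ValueError(f"Quality check failed: {null_keys} rows missing order_id")

print(f"Quality check passed: {total_rows} rows, no null keys")
```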

Posted 6 hours ago

Apply

5.0 - 15.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

TCS Hiring for HPC Engineer (High-Performance Computing Engineer)_PAN India Experience: 5 to 15 Years Only Job Location: PAN India Required Technical Skill Set: AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation Desired Competencies (Technical/Behavioral Competency) Primary Focus: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Technical Skills: AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation etc. Python & bash scripting Experience with Slurm (configuring Slurm partitions, converting standalone jobs to Slurm etc.) Experience with R Workbench & Package manager – install & configure R Workbench with a scalable backend such as Slurm, Kubernetes etc. Experience with Docker Key Responsibilities: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Good-to-Have: 1. Should have knowledge of Docker, ECS, RDS, S3 2. Good knowledge of developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm Kind Regards, Priyankha M
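The "converting standalone jobs to Slurm" item above is easy to illustrate: the sketch below wraps an existing command in an sbatch script and submits it. The resource values and the miniwdl command are hypothetical placeholders, and a working Slurm installation (the sbatch CLI) is assumed.

```python
# Illustrative sketch of converting a standalone job to Slurm: wrap a
# command-line step in an sbatch script and submit it to the cluster.
import subprocess
from pathlib import Path

def submit_as_slurm_job(command: str, job_name: str = "bioinfo_step") -> None:
    script = f"""#!/bin/bash
#SBATCH --job-name={job_name}
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --output={job_name}.%j.out

{command}
"""
    script_path = Path(f"{job_name}.sbatch")
    script_path.write_text(script)
    # Requires Slurm's sbatch CLI on the submit host (e.g. an AWS ParallelCluster head node).
    subprocess.run(["sbatch", str(script_path)], check=True)

# Example: a step that previously ran directly on a workstation (placeholder command).
submit_as_slurm_job("miniwdl run workflow.wdl sample=s3://example-bucket/sample.fastq")
```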

Posted 6 hours ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Thiruvananthapuram

Work from Office

Naukri logo

Key Responsibilities 1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process. 2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. They also communicate the value of a solution to stakeholders and clients. 3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems. Should also manage stakeholder expectations. 4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows. 5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise. 6. Ensuring Quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals. 7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams. 8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), Web App development frameworks and DevOps practices. 9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement. Technical Skills 1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models. 2. Knowledge or experience working with self-hosted or managed LLMs. 3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition. 4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express etc.) and building RESTful and GraphQL APIs. 5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability. 6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux) 7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached). 8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing. 9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets. 10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights. 11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform). 12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX). 13. Knowledge or experience in CI/CD, IaC and Cloud Native toolchains. 14.
Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication. 15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.
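One of the deployment options named in the skills list above (FastAPI) can be illustrated with a minimal sketch. The model artifact and feature schema are hypothetical, and the service would typically be run behind the API gateway and security layers the posting also mentions.

```python
# Minimal, illustrative model-serving sketch with FastAPI.
# "model.pkl" is a hypothetical pre-trained scikit-learn-style artifact.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:  # placeholder model artifact
    model = pickle.load(f)

class Features(BaseModel):
    age: float
    income: float

@app.post("/predict")
def predict(features: Features):
    # Assumes a model exposing a scikit-learn-style predict() interface.
    score = model.predict([[features.age, features.income]])[0]
    return {"score": float(score)}

# Run locally with:  uvicorn app:app --reload
```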

Posted 6 hours ago

Apply

5.0 - 15.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

TCS Hiring for HPC Engineer (High-Performance Computing Engineer)_PAN India Experience: 5 to 15 Years Only Job Location: PAN India Required Technical Skill Set: AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation Desired Competencies (Technical/Behavioral Competency) Primary Focus: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Technical Skills: AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation etc. Python & bash scripting Experience with Slurm (configuring Slurm partitions, converting standalone jobs to Slurm etc.) Experience with R Workbench & Package manager – install & configure R Workbench with a scalable backend such as Slurm, Kubernetes etc. Experience with Docker Key Responsibilities: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Good-to-Have: 1. Should have knowledge of Docker, ECS, RDS, S3 2. Good knowledge of developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm Kind Regards, Priyankha M

Posted 6 hours ago

Apply

5.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

TCS Hiring for HPC Engineer (High-Performance Computing Engineer)_PAN India Experience: 5 to 15 Years Only Job Location: PAN India Required Technical Skill Set: AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation Desired Competencies (Technical/Behavioral Competency) Primary Focus: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Technical Skills: AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation etc. Python & bash scripting Experience with Slurm (configuring Slurm partitions, converting standalone jobs to Slurm etc.) Experience with R Workbench & Package manager – install & configure R Workbench with a scalable backend such as Slurm, Kubernetes etc. Experience with Docker Key Responsibilities: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Good-to-Have: 1. Should have knowledge of Docker, ECS, RDS, S3 2. Good knowledge of developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm Kind Regards, Priyankha M

Posted 6 hours ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Description Want to build at the cutting edge of immersive shopping experiences? The Visual Innovation Team (VIT) is at the center of all advanced visual and immersive content at Amazon. We're pioneering VR and AR shopping, CGI, and GenAI. We are looking for a Design Technologist who will help drive innovation in this space, who understands the technical problems the team may face from an artistic perspective, and who can provide creative technical solutions. This role is for you if you want to be a part of: Partnering with world-class creatives and scientists to drive innovation in content creation Developing and expanding Amazon’s VIT Virtual Production workflow. Building one of the largest content libraries on the planet Driving the success and adoption of emerging experiences across Amazon Key job responsibilities We are looking for a Design Technologist with a specialty in workflow automation using novel technologies like Gen-AI and CV. You will prototype and deliver creative solutions to the technical problems related to Amazon visuals. The right person will bring an implicit understanding of the balance needed between design, technology and creative professionals — helping scale video content creation within Amazon by enabling our teams to work smarter, not harder. Design Technologists In This Role Will Act as a bridge between creative and engineering disciplines to solve multi-disciplinary problems Work directly with videographers and studio production to develop semi-automated production workflows Collaborate with other tech artists and engineers to build and maintain a centralized suite of creative workflows and tooling Work with creative leadership to research, prototype and implement the latest industry trends that expand our production capabilities and improve efficiency A day in the life As a Design Technologist, a typical day will include but is not limited to coding and development of tools, workflows, and automation to improve the creative crew's experience and increase productivity. This position will be focused on in-house video creation, with virtual production and Gen-AI workflows. You'll collaborate with production teams, observing, empathizing, and prototyping novel solutions. The ideal candidate is observant, creative, curious, and empathetic, understanding that problems often have multiple approaches. Basic Qualifications 6+ years of front-end technologist, engineer, or UX prototyper experience Have coding samples in front-end programming languages Have an available online portfolio Experience developing visually polished, engaging, and highly fluid UX prototypes Experience collaborating with UX, Product, and technical partners Preferred Qualifications Knowledge of databases and AWS database services: ElasticSearch, Redshift, DynamoDB Experience with machine learning (ML) tools and methods Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ASSPL - Haryana - C77 Job ID: A2799984

Posted 6 hours ago

Apply

5.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

TCS Hiring for HPC Engineer (High-Performance Computing Engineer)_PAN India Experience: 5 to 15 Years Only Job Location: PAN India Required Technical Skill Set: AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation Desired Competencies (Technical/Behavioral Competency) Primary Focus: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Technical Skills: AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation etc. Python & bash scripting Experience with Slurm (configuring Slurm partitions, converting standalone jobs to Slurm etc.) Experience with R Workbench & Package manager – install & configure R Workbench with a scalable backend such as Slurm, Kubernetes etc. Experience with Docker Key Responsibilities: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Good-to-Have: 1. Should have knowledge of Docker, ECS, RDS, S3 2. Good knowledge of developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm Kind Regards, Priyankha M

Posted 6 hours ago

Apply

5.0 years

0 Lacs

India

On-site

Linkedin logo

Embark on an exhilarating career path in software development with 3Pillar Global! We're thrilled to invite you to become a part of our team and prepare for an exciting journey ahead. At 3Pillar Global, we're dedicated to harnessing state-of-the-art technologies to transform industries through data-driven decision-making. As a Senior Business Intelligence Engineer, you'll play a pivotal role in our dynamic team, actively shaping groundbreaking projects that redefine data analytics for our clients. Your contributions will empower them with a strategic edge in their respective industries. If you have a passion for data analytics solutions that make a real-world impact, consider this your pass to the captivating world of Data Science and Engineering! 🔮🌐 Minimum Qualifications Demonstrated expertise with a minimum of 5+ years of experience as a BI engineer or in a similar role Advanced SQL skills and experience with relational databases and database design. Strong data visualization skills using tools like Power BI, Tableau, AWS QuickSight, etc. Knowledge of connecting multiple data sources, importing data, and transforming data for Business Intelligence Experience working with cloud Data Warehouse solutions like Snowflake, Redshift, BigQuery etc. Experience in organizing KPIs and dashboards as well as reading metric trends to evaluate business performance Experience in gathering BI requirements and structuring specifications to produce business-relevant datasets, reports and dashboards Passion for democratizing data effectively to the wider organization for self-service Keenness to innovate and improve the BI process beyond basic reporting Strong communication skills, able to collaborate effectively with both the business and the technical side Proven ability to work independently and with a team Ability to guide other BI engineers
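To illustrate the kind of warehouse-to-dashboard work described above, a minimal sketch of pulling a KPI dataset from Redshift with Python follows. Connection details, schema, and table names are placeholders, and redshift_connector is only one of several client libraries that could be used (psycopg2 or a SQLAlchemy engine would work equally well).

```python
# Illustrative sketch: pull a monthly KPI dataset from a cloud warehouse
# for a BI dashboard. All connection details and table names are placeholders.
import pandas as pd
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    database="analytics",
    user="bi_reader",
    password="***",  # placeholder; use a secrets manager in practice
)

kpi_sql = """
    SELECT date_trunc('month', order_date) AS month,
           SUM(revenue)                    AS monthly_revenue,
           COUNT(DISTINCT customer_id)     AS active_customers
    FROM sales.orders
    GROUP BY 1
    ORDER BY 1;
"""

# Feed this frame into a Tableau/Power BI extract or a QuickSight dataset.
kpis = pd.read_sql(kpi_sql, conn)
print(kpis.head())
```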

Posted 6 hours ago

Apply

5.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

TCS Hiring for HPC Engineer (High-Performance Computing Engineer)_PAN India Experience: 5 to 15 Years Only Job Location: PAN India Required Technical Skill Set: AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation Desired Competencies (Technical/Behavioral Competency) Primary Focus: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Technical Skills: AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation etc. Python & bash scripting Experience with Slurm (configuring Slurm partitions, converting standalone jobs to Slurm etc.) Experience with R Workbench & Package manager – install & configure R Workbench with a scalable backend such as Slurm, Kubernetes etc. Experience with Docker Key Responsibilities: Developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm and R on AWS cloud utilizing technologies like AWS Parallel Cluster, R Workbench, Batch, ECS, Kubernetes etc. Good-to-Have: 1. Should have knowledge of Docker, ECS, RDS, S3 2. Good knowledge of developing and running bioinformatics workflows/pipelines leveraging and managing WDL engines like miniWDL, Slurm Kind Regards, Priyankha M

Posted 6 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Title: Cloud Engineer – AWS, CI/CD & Infrastructure Automation Department: Information Technology / Research Computing Location: Bangalore/Kochi Shift: Need clarity Job Type: Full-Time Reports To: Director of IT Infrastructure / Head of Research Computing Position Summary: DBiz.ai is seeking a dedicated and technically proficient Cloud Engineer to support our growing cloud infrastructure needs across academic, research, and administrative domains. The ideal candidate will have strong experience with AWS core services, CI/CD pipelines using GitHub, and Infrastructure as Code (IaC) to help modernize and automate our cloud environments. Key Responsibilities: Design, implement, and manage AWS-based cloud infrastructure to support academic and research computing needs. Develop and maintain CI/CD pipelines for deploying applications and services using GitHub Actions or similar tools. Automate infrastructure provisioning and configuration using IaC tools such as Terraform or AWS CloudFormation. Design and implement solutions using AWS Machine Learning (SageMaker, Bedrock), data analytics (Redshift), and data processing tools (Glue, Step Functions) to support automation and intelligent decision-making. Collaborate with faculty, researchers, and IT staff to support cloud-based research workflows and data pipelines. Ensure cloud environments are secure, scalable, and cost-effective. Monitor system performance and troubleshoot issues related to cloud infrastructure and deployments. Document cloud architecture, workflows, and best practices for internal knowledge sharing and compliance. Required Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. Strong experience with AWS core services (EC2, S3, IAM, VPC, Lambda, CloudWatch, etc.). Proficiency in GitHub and building CI/CD pipelines. Hands-on experience with Infrastructure as Code tools (Terraform, CloudFormation, etc.). Familiarity with scripting languages (e.g., Python, Bash). Exposure to AWS Machine Learning services (e.g., SageMaker, Bedrock), Data Analytics tools (e.g., Redshift), and Data Processing and Orchestration services (e.g., Glue, Step Functions). Strong understanding of cloud security, networking, and architecture principles. Excellent communication and collaboration skills, especially in academic or research settings. Preferred Qualifications: AWS Certification (e.g., AWS Certified Solutions Architect – Associate). Experience supporting research computing environments or academic IT infrastructure. Familiarity with containerization (Docker, Kubernetes) and hybrid cloud environments. Experience working in a university or public sector environment.
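As an illustration of the IaC responsibilities above, the sketch below provisions a single resource through CloudFormation using boto3. The stack and bucket names are hypothetical; in practice the template would live in version control and be applied from a GitHub Actions pipeline rather than ad hoc.

```python
# Minimal, illustrative programmatic provisioning via CloudFormation.
# Stack and bucket names are placeholders; AWS credentials are assumed to be
# configured in the environment.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ResearchDataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-research-data-bucket"},
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="research-data-stack",
    TemplateBody=json.dumps(template),
)
```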

Posted 6 hours ago

Apply

0 years

0 Lacs

India

On-site

Linkedin logo

Role Summary: We are looking for experienced Data Modelers to support large-scale data engineering and analytics initiatives. The role involves developing logical and physical data models, working closely with business and engineering teams to define data requirements, and ensuring alignment with enterprise standards. Independently complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, Spark, Databricks Delta Lakehouse or other Cloud data warehousing technologies. Governs data design/modelling – documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned. Develop a deep understanding of the business domains like Customer, Sales, Finance, Supplier, and enterprise technology inventory to craft a solution roadmap that achieves business objectives, maximizes reuse. Drive collaborative reviews of data model design, code, data, security features to drive data product development. Show expertise with data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; SAP Data Model. Develop reusable data models based on cloud-centric, code-first approaches to data management and data mapping. Partner with the data stewards team for data discovery and action by business customers and stakeholders. Provides and/or supports data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Assist with data planning, sourcing, collection, profiling, and transformation. Support data lineage and mapping of source system data to canonical data stores. Create Source to Target Mappings (STTM) for ETL and BI developers. Skills needed: Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models, CPG / Manufacturing/Sales/Finance/Supplier/Customer domains) Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, Retail data like IRI, Nielsen Retail.
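To make the Source-to-Target Mapping (STTM) deliverable above more concrete, a small illustrative sketch follows. The field names and sample rows are hypothetical; real mappings are usually maintained in a modeling tool or spreadsheet rather than in code.

```python
# Illustrative representation of STTM entries handed to ETL/BI developers.
# Source tables/columns and transformations shown here are placeholders.
from dataclasses import dataclass

@dataclass
class SttmEntry:
    source_system: str
    source_table: str
    source_column: str
    target_table: str
    target_column: str
    transformation: str

mappings = [
    SttmEntry("SAP_ECC", "KNA1", "KUNNR", "dim_customer", "customer_id", "trim + cast to varchar(10)"),
    SttmEntry("SAP_ECC", "KNA1", "NAME1", "dim_customer", "customer_name", "pass-through"),
]

for m in mappings:
    print(f"{m.source_table}.{m.source_column} -> {m.target_table}.{m.target_column}: {m.transformation}")
```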

Posted 7 hours ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift (see the sketch after this list). (medium)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
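To ground a few of the questions above (SORTKEY vs. DISTKEY, the COPY command), here is a short, illustrative sketch. The table, bucket, and IAM role names are hypothetical, and the statements would be executed through any Redshift SQL client.

```python
# Illustrative Redshift DDL and load statements; all object names are placeholders.
create_table_sql = """
CREATE TABLE sales_fact (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTKEY (customer_id)   -- co-locate rows that join on customer_id across slices
SORTKEY (sale_date);    -- let the planner skip blocks when filtering by date
"""

copy_sql = """
COPY sales_fact
FROM 's3://example-bucket/sales/2024/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load-role'
FORMAT AS PARQUET;
"""

# Print the statements; in practice they would be run via a Redshift SQL client.
print(create_table_sql)
print(copy_sql)
```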

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies