
2214 Redshift Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 years

0 Lacs

India

Remote

Linkedin logo

Join phData, a dynamic and innovative leader in the modern data stack. We partner with major cloud data platforms like Snowflake, AWS, Azure, GCP, Fivetran, Pinecone, Glean and dbt to deliver cutting-edge services and solutions. We're committed to helping global enterprises overcome their toughest data challenges. phData is a remote-first global company with employees based in the United States, Latin America and India. We celebrate the culture of each of our team members and foster a community of technological curiosity, ownership and trust. Even though we're growing extremely fast, we maintain a casual, exciting work environment. We hire top performers and give you the autonomy to deliver results.

5x Snowflake Partner of the Year (2020, 2021, 2022, 2023, 2024)
Fivetran, dbt, Alation, Matillion Partner of the Year
#1 Partner in Snowflake Advanced Certifications
600+ Expert Cloud Certifications (Sigma, AWS, Azure, Dataiku, etc.)
Recognized as an award-winning workplace in the US, India and LATAM

About You: You are a strategic thinker with a passion for building scalable, cloud-native solutions. With a strong technical background, you excel in driving data platform implementations and managing complex projects. Your expertise in platforms like Snowflake, AWS, and Azure is complemented by a deep understanding of how to align data infrastructure with strategic business goals. You have a proven track record in professional consulting, building scalable, secure solutions that optimize data platform performance. You thrive in environments that challenge you to think critically, lead technical teams, and optimize systems for continuous improvement. As part of our Managed Services team, you are driven by a commitment to customer success and long-term growth. You embrace best practices, champion effective platform management, and contribute to the evolution of data platforms by promoting phData's Operational Maturity Framework. Your approach is hands-on, collaborative, and results-oriented, making you a key player in ensuring clients' ongoing success.

Key Responsibilities:
Technical Leadership: Lead the design, architecture, and implementation of large-scale data platform projects on Snowflake, AWS, and Azure. Guide technical teams through data migration, integration, and performance optimization.
Platform Optimization, Integration and Automation: Identify opportunities for automation and performance optimization to enhance clients' data platform capabilities. Lead data migration to cloud platforms (e.g., Snowflake, Redshift), ensuring smooth integration of data lakes, data warehouses, and distributed systems.
Platform Security: Set up clients' data platforms with industry best practices and robust security standards, including data governance.
Process Engineering: Adapt to ongoing changes in the platform environment, data, and technology by debugging, enhancing features, and re-engineering as needed. Plan ahead for changes to upstream data sources to minimize impact on users and systems, ensuring scalable adoption of modern data platforms.
Consulting Leadership: Manage multiple customer engagements to ensure timely project delivery, optimize operations to prevent resource overruns, and maximize platform ROI. Serve as a trusted advisor, offering strategic guidance on data platform optimization and addressing technical challenges to meet business goals.
Cross-functional Collaboration & Mentorship: Partner with sales, engineering, and support teams to ensure seamless project delivery and high customer satisfaction. Provide mentorship and technical guidance to junior engineers, promoting a culture of continuous learning and excellence.

Key Skills and Experience:
Required:
Solutions Architecture: 10 years of hands-on experience architecting, designing, implementing, and managing cloud-native data platforms and solutions.
Client-Facing Skills: Strong communication skills, with experience presenting to executive stakeholders and creating detailed solution documentation.
Data Platforms: Extensive experience managing enterprise data platforms (e.g., Snowflake, Redshift, Azure Data Warehouse) with strong skills in performance tuning and troubleshooting.
Cloud Expertise: Deep knowledge of AWS, Azure, and Snowflake ecosystems, including services like S3, ADLS, Kinesis, Data Factory, dbt, and Kafka.
IT Operations: Build and manage scalable, secure data platforms aligned with strategic goals. Excel at optimizing systems and driving continuous improvement in platform performance.
SQL Mastery: Advanced proficiency in Microsoft SQL, including writing, debugging, and optimizing queries.
DevOps and Infrastructure: Proficiency in infrastructure-as-code (IaC) tools like Terraform or CloudFormation, and experience with CI/CD pipelines (e.g., Bitbucket, GitHub).
Data Integration Tools: Expertise with tools such as AWS Data Migration Services, Azure Data Factory, Matillion, Fivetran, or Spark.
Preferred:
Certifications: Snowflake SnowPro Core certification or equivalent.
Python Proficiency: Experience using Python for task automation and operational efficiency.
CI/CD Expertise: Hands-on experience with automated deployment frameworks, such as Flyway or Liquibase.
Education: A Bachelor's degree in Computer Science, Engineering, or a related field is highly preferred. Advanced degrees or equivalent certifications are also preferred.

Why phData? We offer:
Remote-First Workplace
Medical Insurance for Self & Family
Medical Insurance for Parents
Term Life & Personal Accident
Wellness Allowance
Broadband Reimbursement
A 2-4 week bootcamp and continuous learning opportunities to enhance your skills and expertise
Other perks include paid certifications, a professional development allowance, and additional compensation for creating company-approved content (dashboards, blogs, videos, whitepapers, etc.)

phData celebrates diversity and is committed to creating an inclusive environment for all employees. Our approach helps us to build a winning team that represents a variety of backgrounds, perspectives, and abilities. So, regardless of how your diversity expresses itself, you can find a home here at phData. We are proud to be an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, color, religion, national origin, sex (including pregnancy), sexual orientation, gender identity, gender expression, age, veteran status, genetic information, disability, or other applicable legally protected characteristics. If you would like to request an accommodation due to a disability, please contact us at People Operations.
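To make the performance-tuning and platform-management duties above concrete, here is a minimal, hypothetical Python sketch (not phData's tooling) that pulls the slowest recent queries from Snowflake's ACCOUNT_USAGE view as a starting point for tuning. It assumes the snowflake-connector-python package; the account, user, and warehouse values are placeholders.

```python
# Hypothetical sketch: list the slowest Snowflake queries from the last 7 days
# as a starting point for performance tuning. Credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",    # placeholder
    user="your_user",          # placeholder
    password="your_password",  # placeholder
    warehouse="ADMIN_WH",      # placeholder
)

SQL = """
SELECT query_id, user_name, warehouse_name,
       total_elapsed_time / 1000 AS elapsed_s,
       bytes_scanned
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY total_elapsed_time DESC
LIMIT 20
"""

with conn.cursor() as cur:
    for query_id, user, wh, elapsed_s, scanned in cur.execute(SQL):
        print(f"{query_id} {user} {wh} {elapsed_s:.1f}s {scanned} bytes scanned")
conn.close()
```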

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Sonatype is the software supply chain security company. We provide the world’s best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise-grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world’s largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100, and 15 million software developers rely on Sonatype to optimize their software supply chains.

The Opportunity
We’re looking for a Senior Data Engineer to join our growing Data Platform team. You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, machine learning, and business intelligence across Sonatype. You’ll work closely with stakeholders across product, engineering, and business teams to ensure data is reliable, accessible, and actionable. This role is ideal for someone who thrives on solving complex data challenges at scale and enjoys building high-quality, maintainable systems.

What You’ll Do
Design, build, and maintain scalable data pipelines and ETL/ELT processes
Architect and optimize data models and storage solutions for analytics and operational use
Collaborate with data scientists, analysts, and engineers to deliver trusted, high-quality datasets
Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake)
Implement observability, alerting, and data quality monitoring for critical pipelines
Drive best practices in data engineering, including documentation, testing, and CI/CD
Contribute to the design and evolution of our next-generation data lakehouse architecture

Minimum Qualifications
5+ years of experience as a Data Engineer or in a similar backend engineering role
Strong programming skills in Python, Scala, or Java
Hands-on experience with HBase or similar NoSQL columnar stores
Hands-on experience with distributed data systems like Spark, Kafka, or Flink
Proficient in writing complex SQL and optimizing queries for performance
Experience building and maintaining robust ETL/ELT (data warehousing) pipelines in production
Familiarity with workflow orchestration tools (Airflow, Dagster, or similar)
Understanding of data modeling techniques (star schema, dimensional modeling, etc.)

Bonus Points
Experience working with Databricks, dbt, Terraform, or Kubernetes
Familiarity with streaming data pipelines or real-time processing
Exposure to data governance frameworks and tools
Experience supporting data products or ML pipelines in production
Strong understanding of data privacy, security, and compliance best practices

Why You’ll Love Working Here
Data with purpose: Work on problems that directly impact how the world builds secure software
Modern tooling: Leverage the best of open-source and cloud-native technologies
Collaborative culture: Join a passionate team that values learning, autonomy, and impact

At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
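As an illustration of the orchestration and data-quality monitoring responsibilities in this listing, here is a minimal sketch of an Airflow DAG with a simple row-count quality gate. It assumes a recent Airflow 2.x installation; the DAG, task, and table names are invented for the example and are not Sonatype's pipelines.

```python
# Illustrative sketch only: a daily Airflow DAG where a quality check gates the load.
# Names and thresholds are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**_):
    # Placeholder: pull yesterday's orders from a source system into staging.
    print("extracting orders into staging")


def check_row_count(**_):
    # Placeholder quality gate: fail the run if staging is suspiciously empty.
    staged_rows = 1200  # would come from a warehouse query in a real pipeline
    if staged_rows == 0:
        raise ValueError("staging.orders is empty; aborting downstream load")


with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    quality = PythonOperator(task_id="check_row_count", python_callable=check_row_count)
    extract >> quality
```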

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Sonatype is the software supply chain security company. We provide the world’s best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise-grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world’s largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100, and 15 million software developers rely on Sonatype to optimize their software supply chains.

The Opportunity
We’re looking for a Senior Data Engineer to join our growing Data Platform team. This role is a hybrid of data engineering and business intelligence, ideal for someone who enjoys solving complex data challenges while also building intuitive and actionable reporting solutions. You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, dashboards, machine learning, and decision-making across Sonatype. You’ll also be responsible for delivering clear, compelling, and insightful business intelligence through tools like Looker Studio and advanced SQL queries.

What You’ll Do
Design, build, and maintain scalable data pipelines and ETL/ELT processes.
Architect and optimize data models and storage solutions for analytics and operational use.
Create and manage business intelligence reports and dashboards using tools like Looker Studio, Power BI, or similar.
Collaborate with data scientists, analysts, and stakeholders to ensure datasets are reliable, meaningful, and actionable.
Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake).
Write complex, high-performance SQL queries to support reporting and analytics needs.
Implement observability, alerting, and data quality monitoring for critical pipelines.
Drive best practices in data engineering and business intelligence, including documentation, testing, and CI/CD.
Contribute to the evolution of our next-generation data lakehouse and BI architecture.

What We’re Looking For
5+ years of experience as a Data Engineer or in a hybrid data/reporting role.
Strong programming skills in Python, Java, or Scala.
Proficiency with data tools such as Databricks, data modeling techniques (e.g., star schema, dimensional modeling), and data warehousing solutions like Snowflake or Redshift.
Hands-on experience with modern data platforms and orchestration tools (e.g., Spark, Kafka, Airflow).
Proficient in SQL with experience in writing and optimizing complex queries for BI and analytics.
Experience with BI tools such as Looker Studio, Power BI, or Tableau.
Experience in building and maintaining robust ETL/ELT pipelines in production.
Understanding of data quality, observability, and governance best practices.

Bonus Points
Experience with dbt, Terraform, or Kubernetes.
Familiarity with real-time data processing or streaming architectures.
Understanding of data privacy, compliance, and security best practices in analytics and reporting.

Why You’ll Love Working Here
Data with purpose: Work on problems that directly impact how the world builds secure software.
Full-spectrum impact: Use both engineering and analytical skills to shape product, strategy, and operations.
Modern tooling: Leverage the best of open-source and cloud-native technologies.
Collaborative culture: Join a passionate team that values learning, autonomy, and real-world impact.

At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
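For the BI side of this listing, here is a small sketch of the kind of star-schema reporting query the role describes, run against Redshift from Python via psycopg2 (Redshift speaks the PostgreSQL wire protocol). The schema, table, and connection values are placeholders, not an actual Sonatype data model.

```python
# A minimal sketch of a dimensional (star-schema) reporting query for a dashboard.
# All identifiers and credentials below are placeholders.
import psycopg2

REVENUE_BY_MONTH = """
SELECT d.year_month,
       c.region,
       SUM(f.net_amount) AS revenue
FROM analytics.fact_orders f
JOIN analytics.dim_date d     ON f.date_key = d.date_key
JOIN analytics.dim_customer c ON f.customer_key = c.customer_key
WHERE d.year_month >= '2024-01'
GROUP BY d.year_month, c.region
ORDER BY d.year_month, c.region;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="report_user", password="***",
)
with conn, conn.cursor() as cur:
    cur.execute(REVENUE_BY_MONTH)
    for year_month, region, revenue in cur.fetchall():
        print(year_month, region, revenue)
conn.close()
```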

Posted 3 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Sonatype is the software supply chain security company. We provide the world’s best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise-grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world’s largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100, and 15 million software developers rely on Sonatype to optimize their software supply chains.

About The Role
The Engineering Manager – Data role at Sonatype blends hands-on data engineering with leadership and strategic influence. You will lead high-performing data engineering teams to build the infrastructure, pipelines, and systems that fuel analytics, business intelligence, and machine learning across our global products. We’re looking for a leader who brings deep technical experience in modern data platforms, is fluent in programming, and understands the nuances of open-source consumption and software supply chain security. This hybrid role is based out of our Hyderabad office.

What You’ll Do
Lead, mentor, and grow a team of data engineers responsible for building scalable, secure, and maintainable data solutions.
Design and architect data pipelines, lakehouse systems, and warehouse models using tools such as Databricks, Airflow, Spark, and Snowflake/Redshift.
Stay hands-on: write, review, and guide production-level code in Python, Java, or similar languages.
Ensure strong foundations in data modeling, governance, observability, and data quality.
Collaborate with cross-functional teams including Product, Security, Engineering, and Data Science to translate business needs into data strategies and deliverables.
Apply your knowledge of open-source component usage, dependency management, and software composition analysis to ensure our data platforms support secure development practices.
Embed application security principles into data platform design, supporting Sonatype’s mission to secure the software supply chain.
Foster an engineering culture that prioritizes continuous improvement, technical excellence, and team ownership.

Who You Are
A technical leader with a strong background in data engineering, platform design, and secure software development.
Comfortable operating across domains: data infrastructure, programming, architecture, security, and team leadership.
Passionate about delivering high-impact results through technical contributions, mentoring, and strategic thinking.
Familiar with modern data engineering practices, open-source ecosystems, and the challenges of managing data securely at scale.
A collaborative communicator who thrives in hybrid and cross-functional team environments.

What You Need
6+ years of experience in data engineering, backend systems, or infrastructure development.
2+ years of experience in a technical leadership or engineering management role with hands-on contribution.
Expertise in data technologies: Databricks, Spark, Airflow, Snowflake/Redshift, dbt, etc.
Strong programming skills in Python, Java, or Scala with experience building robust, production-grade systems.
Experience in data modeling (dimensional modeling, star/snowflake schema), data warehousing, and ELT/ETL pipeline development.
Understanding of software dependency management and open-source consumption patterns.
Familiarity with application security principles and a strong interest in secure software supply chains.
Experience supporting real-time data systems or streaming architectures.
Exposure to machine learning pipelines or data productization.
Experience with tools like Terraform, Kubernetes, and CI/CD for data engineering workflows.
Knowledge of data governance frameworks and regulatory compliance (GDPR, SOC 2, etc.).

Why Join Us?
Help secure the software supply chain for millions of developers worldwide.
Build meaningful software in a collaborative, fast-moving environment with strong technical peers.
Stay hands-on while leading: technical leadership is part of the job, not separate from it.
Join a global engineering organization with deep local roots and a strong team culture.
Competitive salary, great benefits, and opportunities for growth and innovation.

At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
3 to 4 years’ experience implementing data-intensive solutions using agile methodologies
Experience with relational databases and using SQL for data querying, transformation and manipulation
Experience modelling data for analytical consumers
Hands-on Mantas (Oracle FCCM) Scenario Development experience throughout the full development life cycle
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud-native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills

Technical Skills (Must Have)
ETL: Hands-on experience building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager, Scenario Development; thorough knowledge and hands-on experience in Mantas FSDM, DIS, Batch Scenario Manager
Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Iceberg, Delta
Others: Basics of job schedulers like Autosys. Basics of entitlement management

Certification on any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
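To ground the ETL requirements in this listing (the Mantas/FCCM pieces are product-specific, so the sketch below sticks to the generic Spark pipeline skill), here is a minimal PySpark example. Paths and column names are illustrative placeholders, not the bank's actual data model.

```python
# A minimal PySpark sketch: read raw records, standardize them, and publish a
# daily aggregate. All paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions_daily").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/transactions/")  # placeholder path

cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("booking_date", F.to_date("booking_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount").isNotNull())
)

daily = (
    cleaned.groupBy("booking_date", "account_id")
           .agg(F.sum("amount").alias("total_amount"),
                F.count("*").alias("txn_count"))
)

daily.write.mode("overwrite").partitionBy("booking_date") \
     .parquet("s3://example-bucket/curated/daily_transactions/")
```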

Posted 3 days ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

We Breathe Life Into Data
At Komodo Health, our mission is to reduce the global burden of disease. And we believe that smarter use of data is essential to this mission. That’s why we built the Healthcare Map — the industry’s largest, most complete, precise view of the U.S. healthcare system — by combining de-identified, real-world patient data with innovative algorithms and decades of clinical experience. The Healthcare Map serves as our foundation for a powerful suite of software applications, helping us answer healthcare’s most complex questions for our partners. Across the healthcare ecosystem, we’re helping our clients unlock critical insights to track detailed patient behaviors and treatment patterns, identify gaps in care, address unmet patient needs, and reduce the global burden of disease. As we pursue these goals, it remains essential to us that we stay grounded in our values: be awesome, seek growth, deliver “wow,” and enjoy the ride. At Komodo, you will be joining a team of ambitious, supportive Dragons with diverse backgrounds but a shared passion to deliver on our mission to reduce the burden of disease — and enjoy the journey along the way.

The Opportunity at Komodo Health
Komodo aims to build the best healthcare data architecture in the industry. We are rapidly growing, expanding our infrastructure stack, and adopting SRE best practices. You will be joining the Sentinel team. We build, operate, and constantly improve our infrastructure offerings for 150+ client organizations. The team achieves our business goals by adopting best infrastructure practices and leveraging 100% automation to scale our product offerings and reduce costs. You will work with passionate team members across the country to accomplish our goal: reduce the burden of disease.

Looking back on your first 12 months at Komodo Health, you will have…
Developed a deep understanding of Sentinel and Sentinel’s customers
Built and operated cloud infrastructure (AWS) based on customer requirements
Solved development and deployment challenges around making Sentinel’s infrastructure highly reliable, easy to maintain, and cost effective
Participated in the development, execution, and support of new feature rollouts with solution architects and customer success teams
Developed and contributed to existing and new monitoring and alerting systems for Sentinel infrastructure
Hardened infrastructure security, including network, storage, user access, etc.
Responded to and solved key customer-reported issues in a timely manner
Participated in an on-call rotation

What You Bring To Komodo Health
Excitement about automation, building scalable technical solutions, and being a team player
Proficiency in at least one mainstream programming language such as Python and/or Java, with deep technical troubleshooting skills
Proficiency in Terraform, Scalr, and/or other similar tools
Experience with AWS’s core services and their relationships; ability to create solutions based on users’ high-level descriptions, learn new cloud technologies and use them as needed, etc.
Hands-on experience building CI/CD pipelines using GitHub Actions, Jenkins, etc.
Hands-on experience with networking, subnets, CIDR, etc., as applicable to deploying applications and making sure they are accessible to our users across the globe
Hands-on experience with scripting (Bash and/or PowerShell)
Knowledge of operating system basics (Linux and/or Windows)
Knowledge of main cloud vendors and big-data tools and frameworks like Snowflake, Airflow, and/or Spark
Working knowledge of data modeling, schema design, and data storage with relational (e.g., PostgreSQL), NoSQL (e.g., DynamoDB, Redis), and MPP databases (e.g., Snowflake, Redshift, BigQuery)
Excellent cross-team communication and collaboration skills, with the ability to initiate and effectively drive projects proactively

Additional skills and experience we’d prioritize (nice to have)…
Experience with AWS preferred; AWS Cloud Infrastructure certification is a plus
Backend development experience such as building APIs and microservices using Python, Java, or any other mainstream programming language
Experience with data privacy concerns such as HIPAA or GDPR
Experience working with cross-functional teams and with other customer-facing teams
Passion! We hope you are passionate about our mission and technology
Ownership! We hope you own your work, are accountable, and push it through the finish line. We hope you treat yourself as a co-founder and do not hesitate to share any idea that helps Komodo
Expertise! We do not need you to know everything, but we hope you have deep knowledge in at least one area and can start contributing quickly. And we would love to learn from you in your area(s) of expertise as well

Komodo's AI Standard
At Komodo, we're not just witnessing the AI revolution – we're leading it. This is a pivotal moment in time, where being first to market with AI transforms industries and sets the bar. We've already established industry leadership in leveraging AI to revolutionize healthcare, and we expect every team member to contribute. AI here isn't optional; it's foundational. We expect you to integrate AI into your daily work – from summarizing documents to automating workflows and uncovering insights. This isn't just about efficiency; it's about making every moment more meaningful, building on trust in AI, and driving our collective success. Join us in shaping the future of healthcare intelligence.

Where You’ll Work
Komodo Health has a hybrid work model; we recognize the power of choice and the importance of flexibility for the well-being of both our company and our individual Dragons. Roles may be completely remote based anywhere in the country listed, remote but based in a specific region, or local (commuting distance) to one of our hubs in San Francisco, New York City, or Chicago with remote work options.

What We Offer
Positions may be eligible for company benefits in accordance with Company policy. We offer a competitive total rewards package including medical, dental and vision coverage, along with a broad range of supplemental benefits including a 401k Retirement Plan, prepaid legal assistance, and more. We also offer paid time off for vacation, sickness, holiday, and bereavement. We are pleased to be able to provide 100% company-paid life insurance and long-term disability insurance. This information is intended to be a general overview and may be modified by the Company due to business-related factors.

Equal Opportunity Statement
Komodo Health provides equal employment opportunities to all applicants and employees.
We prohibit discrimination and harassment of any type with regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
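As a small illustration of the infrastructure-hardening work described in this listing (and not Komodo's actual tooling), here is a hedged boto3 sketch that flags S3 buckets missing default encryption or a public-access block. It assumes AWS credentials are already configured in the environment.

```python
# Illustrative security-hardening check: report S3 buckets without default
# encryption or a full public-access block. Assumes configured AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError:
        encrypted = False  # no default encryption configured

    try:
        pab = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        blocked = all(pab.values())
    except ClientError:
        blocked = False  # no public access block configured

    if not (encrypted and blocked):
        print(f"review bucket {name}: encrypted={encrypted}, public_access_blocked={blocked}")
```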

Posted 3 days ago

Apply

6.0 - 9.0 years

0 Lacs

India

Remote

Linkedin logo

About Juniper Square
Our mission is to unlock the full potential of private markets. Privately owned assets like commercial real estate, private equity, and venture capital make up half of our financial ecosystem yet remain inaccessible to most people. We are digitizing these markets, and as a result, bringing efficiency, transparency, and access to one of the most productive corners of our financial ecosystem. If you care about making the world a better place by making markets work better through technology – all while contributing as a member of a values-driven organization – we want to hear from you. Juniper Square offers employees a variety of ways to work, ranging from a fully remote experience to working full-time in one of our physical offices. We invest heavily in digital-first operations, allowing our teams to collaborate effectively across 27 U.S. states, 2 Canadian provinces, India, Luxembourg, and England. We also have physical offices in San Francisco, New York City, Mumbai and Bangalore for employees who prefer to work in an office some or all of the time.

What You’ll Do
Design and architect complex systems with the team, actively participating in design reviews.
Lead and mentor a team of junior developers, fostering their growth and development.
Ensure high quality in team deliverables through guidance, code reviews, and setting best practices.
Collaborate with cross-functional partners (Product, UX, QA) to ensure the team meets project timelines.
Own monitoring, diagnosing, and resolving production issues within BizOps systems.
Contribute to large-scale, complex projects, and execute development tasks through completion.
Perform code reviews to uphold high quality and standards across codebases.
Provide technical support for stakeholder groups, including Customer Success.
Work closely with QA to maintain software quality and increase automation coverage.

Qualifications
Bachelor’s degree in Computer Science or equivalent work experience
6 to 9 years of experience building cloud-based web applications; previous experience leading a team is a plus
Expertise in object-oriented programming (OOP) languages such as Python, Java or similar server-side web application development languages
Experience with front-end technologies like React, CSS frameworks, HTML and JavaScript
Experience with SQL database schema design and query optimization
Experience with cloud technologies (AWS preferred) and container technologies (Docker and k8s)
Experience with data warehousing technologies like Redshift; knowledge of time-series databases is a plus
Experience with GraphQL, Apollo Server, and NestJS is a plus but not required
You must be flexible and adaptable, as you will be operating in a fast-paced startup environment

At Juniper Square, we believe building a diverse workforce and an inclusive culture makes us a better company. If you think this job sounds like a fit, we encourage you to apply even if you don’t meet all the qualifications.

Posted 3 days ago

Apply

7.0 years

0 Lacs

India

On-site

Linkedin logo

At 3Pillar, our focus is on leveraging cutting-edge technologies that revolutionize industries by enabling data-driven decision making. As a Senior Data Engineer, you will hold a crucial position within our dynamic team, actively contributing to thrilling projects that reshape data analytics for our clients, providing them with a competitive advantage in their respective industries. If you have a passion for data analytics solutions that make a real-world impact, consider this your pass to the captivating world of Data Science and Engineering! 🔮🌐

Minimum Qualifications
Total IT experience should be 7+ years.
Demonstrated expertise, with a minimum of 5+ years of experience as a data engineer or in a similar role.
Advanced SQL skills and experience with relational databases and database design.
Experience working with cloud data warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, etc.).
Strong Python skills with hands-on experience in Pandas, NumPy and other data-related libraries.
Experience with big data technologies like Spark, MapReduce, Hadoop, Hive, etc.
Proficient in data pipeline and workflow management tools, e.g., Airflow.
Experience with AWS data engineering services.
Knowledge of AWS services viz. S3, Lambda, EMR, Glue ETL, Athena, RDS, Redshift, EC2, IAM, AWS Kinesis.
Very good exposure to working on data lake and data warehouse solutions.
Excellent problem-solving, communication, and organizational skills.
Proven ability to work independently and with a team.
Ability to guide other data engineers.
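As a brief, hypothetical illustration of the pandas/NumPy wrangling this listing calls for: load a day of events from S3, derive a couple of features, and write a summary table. The bucket, paths, and column names are invented; reading s3:// paths with pandas assumes pyarrow and s3fs are installed.

```python
# Sketch of a small pandas feature/summary job. All names are placeholders.
import numpy as np
import pandas as pd

events = pd.read_parquet("s3://example-bucket/events/date=2024-06-01/")  # placeholder path

# Derive per-event features from assumed timestamp columns.
events["duration_s"] = (events["ended_at"] - events["started_at"]).dt.total_seconds()
events["is_long_session"] = events["duration_s"] > 300

# Aggregate to one row per user for downstream analytics.
summary = (
    events.groupby("user_id")
          .agg(sessions=("session_id", "nunique"),
               avg_duration_s=("duration_s", "mean"),
               long_sessions=("is_long_session", "sum"))
          .reset_index()
)
summary["avg_duration_s"] = np.round(summary["avg_duration_s"], 1)
summary.to_parquet("s3://example-bucket/curated/user_session_summary.parquet", index=False)
```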

Posted 3 days ago

Apply

10.0 years

15 - 17 Lacs

India

Remote

Linkedin logo

Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI and digital products for Fortune 500 clients across finance, retail and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.

Role & Responsibilities
Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark and cloud object storage.
Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
Translate logical data models into physical schemas; own database design, partitioning and lifecycle management for cost-efficient performance.
Implement, automate and monitor ETL/ELT workflows, ensuring reliability, observability and robust error handling.
Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times.
Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.

Skills & Qualifications
Must-Have
6–10 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
Expert proficiency in PySpark, Python and advanced SQL, with a track record of performance-tuning distributed jobs.
Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse/data-lake patterns.
Strong problem-solving skills, DevOps mindset and commitment to code quality; comfortable mentoring fellow engineers.
Preferred
Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor’s or Master’s degree in Computer Science, Engineering or a related field.
Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Exposure to ML feature stores, MLOps workflows and data-governance/compliance frameworks.
Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.

Benefits & Culture Highlights
Remote-first & flexible hours with 25+ PTO days and comprehensive health cover.
Annual training budget & certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.

Skills: data modeling, big data technologies, team leadership, agile methodologies, performance tuning, data, aws, airflow
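Two of the Spark-tuning moves this listing mentions can be shown in a short, assumed-setup sketch (not the consultancy's codebase): broadcasting a small dimension table to avoid a shuffle join, and controlling output partitioning on write. Paths, table names, and the shuffle-partition setting are placeholders.

```python
# Illustrative Spark-tuning sketch: broadcast join plus partitioned output.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("fact_enrichment")
    .config("spark.sql.shuffle.partitions", "200")   # sized to cluster/core count
    .getOrCreate()
)

fact = spark.read.parquet("s3://example-bucket/lake/fact_sales/")      # large table
dim_store = spark.read.parquet("s3://example-bucket/lake/dim_store/")  # small table

# Broadcasting the small dimension avoids shuffling the large fact table.
enriched = fact.join(F.broadcast(dim_store), "store_id", "left")

(
    enriched
    .repartition("sale_date")            # group output tasks by partition value
    .write.mode("overwrite")
    .partitionBy("sale_date")
    .parquet("s3://example-bucket/marts/sales_enriched/")
)
```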

Posted 3 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

P2-C3-STS
Experience with AWS CDK development using TypeScript or Python.
Experience with AWS services like S3, Redshift, RDS, EC2, Glue, CloudFormation, etc.
Experience with Lambda development using Python.
Experience with access management using IAM roles.
Experience with code management using GitHub.
Experience with data pipeline orchestration using Apache Airflow.
Experience with monitoring tools like CloudWatch.
Fluent in programming languages such as SQL, Python and TypeScript.
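A hedged sketch of AWS CDK (v2) in Python, matching the skills listed above: one stack with an S3 bucket and a Python Lambda granted read access. The construct names and asset path are placeholders, and this is only an illustration of the pattern, not the employer's infrastructure.

```python
# Minimal AWS CDK v2 sketch in Python. Names and asset path are placeholders.
import aws_cdk as cdk
from aws_cdk import aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct


class IngestStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned, encrypted landing bucket.
        landing = s3.Bucket(self, "LandingBucket", versioned=True,
                            encryption=s3.BucketEncryption.S3_MANAGED)

        # Python Lambda packaged from a local asset directory (placeholder).
        loader = _lambda.Function(
            self, "LoaderFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="loader.handler",
            code=_lambda.Code.from_asset("lambda/"),
        )

        # IAM permissions are managed through CDK grants rather than hand-written policies.
        landing.grant_read(loader)


app = cdk.App()
IngestStack(app, "IngestStack")
app.synth()
```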

Posted 3 days ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Pune

Remote

Naukri logo

Role: Retool Developer (Data Engineer)
Location: Remote (Anywhere in India only)
Shift: US - CST Time
Department: Data Engineering

Job Summary: We are looking for a highly skilled and experienced Retool Expert to join our team. In this role, you will be responsible for designing, developing, and maintaining internal tools and dashboards using the Retool platform. You will work closely with various teams to understand their needs and build effective solutions that improve operational efficiency and data visibility.

Job Responsibilities:
1. Design and build custom internal tools, dashboards, and applications using Retool to meet specific business requirements.
2. Connect Retool applications to various data sources, including SQL databases, real-time queues, data lakes, and APIs.
3. Write and optimize SQL queries to retrieve, manipulate, and present data effectively within Retool.
4. Utilize basic JavaScript to enhance Retool application functionality, create custom logic, and interact with data.
5. Develop interactive data visualizations and reports within Retool to provide clear insights and support data-driven decision-making.
6. Collaborate with business stakeholders and other technical teams to gather requirements, provide technical guidance, and ensure solutions align with business goals.
7. Troubleshoot, debug, and optimize Retool applications for performance and scalability.
8. Maintain clear documentation of Retool applications, including design, data connections, and logic.
9. Stay up-to-date with the latest Retool features and best practices to continually improve our internal tools.

Qualifications:
1. Strong proficiency in SQL for data querying, manipulation, and database management.
2. Solid understanding of basic JavaScript for scripting, custom logic, and enhancing user experience within Retool.
3. Demonstrated expertise in data visualization, including the ability to create clear, insightful, and user-friendly charts and graphs.
4. Ability to translate business needs into technical solutions using Retool.
5. Excellent problem-solving skills and attention to detail.
6. Strong communication and collaboration skills to work effectively with technical and non-technical teams.

Preferred Qualifications (Bonus Points):
1. Experience with other low-code/no-code platforms.
2. Familiarity with UI/UX principles for building intuitive interfaces.

Why Join Atidiv?
100% Remote | Flexible Work Culture
Opportunity to work with cutting-edge technologies
Collaborative, supportive team that values innovation and ownership
Work on high-impact, global projects

Posted 3 days ago

Apply

0.0 - 1.0 years

5 - 9 Lacs

Kolkata

Work from Office

Naukri logo

Key Responsibilities
Collaborate with data scientists to support end-to-end ML model development, including data preparation, feature engineering, training, and evaluation.
Build and maintain automated pipelines for data ingestion, transformation, and model scoring using Python and SQL.
Assist in model deployment using CI/CD pipelines (e.g., Jenkins) and ensure smooth integration with production systems.
Develop tools and scripts to support model monitoring, logging, and retraining workflows.
Work with data from relational databases (RDS, Redshift) and preprocess it for model consumption.
Analyze pipeline performance and model behavior; identify opportunities for optimization and refactoring.
Contribute to the development of a feature store and standardized processes to support reproducible data science.

Required Skills & Experience
1-3 years of hands-on experience in Python programming for data science or ML engineering tasks.
Solid understanding of machine learning workflows, including model training, validation, deployment, and monitoring.
Proficient in SQL and working with structured data from sources like Redshift, RDS, etc.
Familiarity with ETL pipelines and data transformation best practices.
Basic understanding of ML model deployment strategies and CI/CD tools like Jenkins.
Strong analytical mindset with the ability to interpret and debug data/model issues.

Preferred Qualifications
Exposure to frameworks like scikit-learn, XGBoost, LightGBM, or similar.
Knowledge of ML lifecycle tools (e.g., MLflow, Ray).
Familiarity with cloud platforms (AWS preferred) and scalable infrastructure.
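For illustration, here is a minimal, assumed-for-example sketch of the train-and-evaluate step this role supports: fit a baseline scikit-learn classifier on features extracted from the warehouse and report AUC. The feature file and column names are hypothetical.

```python
# Baseline train/evaluate sketch with scikit-learn. File and columns are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

features = pd.read_csv("churn_features.csv")          # placeholder extract from Redshift/RDS
X = features.drop(columns=["customer_id", "churned"])
y = features["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")   # a score a CI job could gate deployment on
```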

Posted 3 days ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Key Responsibilities:
Consult with clients to understand their business goals and data challenges
Design and implement data collection, cleansing, and transformation processes
Analyze structured and unstructured data using tools like SQL, Python, or R
Create dashboards, reports, and visualizations using tools like Power BI, Tableau, or Looker
Provide strategic recommendations based on data analysis and business objectives
Work with cross-functional teams (e.g., IT, Marketing, Finance) to align data strategies
Ensure data quality, integrity, and compliance with relevant data governance standards
Document findings, processes, and methodologies for stakeholders

Requirements:
Bachelor's degree in Data Science, Computer Science, Statistics, Business Analytics, or related field
5-8 years of experience in data analytics, business intelligence, or data consulting
Proficiency in SQL and at least one programming language (e.g., Python, R)
Experience with data visualization tools (e.g., Tableau, Power BI, Qlik)
Strong problem-solving skills and business acumen
Excellent communication skills to present complex data insights to non-technical audiences
Knowledge of cloud platforms (AWS, Azure, or GCP) is a plus

Preferred Qualifications:
Experience with data warehousing (e.g., Snowflake, Redshift, BigQuery)
Knowledge of machine learning concepts or tools
Experience working in a client-facing consulting role
Certifications in data analytics or cloud platforms (e.g., Google Data Engineer, Microsoft Azure Data Scientist)
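As a rough illustration of the cleansing-and-analysis work this role describes, the small pandas sketch below turns a raw extract into a tidy KPI table that a Tableau or Power BI dashboard could consume. The columns, values, and revenue metric are invented for the example, not taken from the posting.

```python
# Illustrative pandas analysis: clean a small transactions extract and compute a
# monthly revenue KPI per segment. Columns and values are hypothetical.
import pandas as pd

transactions = pd.DataFrame(
    {
        "order_date": ["2024-01-05", "2024-01-20", "2024-02-11", None],
        "segment": ["Retail", "Enterprise", "Retail", "Retail"],
        "revenue": [120.0, 2500.0, 95.5, 40.0],
    }
)

cleaned = (
    transactions
    .dropna(subset=["order_date"])                       # basic data-quality rule
    .assign(order_date=lambda d: pd.to_datetime(d["order_date"]))
    .assign(month=lambda d: d["order_date"].dt.to_period("M").astype(str))
)

kpi = (
    cleaned.groupby(["month", "segment"], as_index=False)["revenue"]
    .sum()
    .rename(columns={"revenue": "monthly_revenue"})
)

# This tidy, long-format table is the shape most BI tools expect.
print(kpi)
```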

Posted 3 days ago

Apply

6.0 years

0 Lacs

India

Remote

Linkedin logo

About BeGig
BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role—you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.

Your Opportunity
Join our network as a Data Analyst and collaborate with forward-thinking startups to transform raw data into actionable insights. You’ll help drive data-informed decisions through deep analysis, visualization, and reporting—impacting product strategies, customer experience, and business growth. Enjoy the flexibility to work remotely, set your own hours, and take on projects aligned with your domain and interests.

Role Overview
As a Data Analyst, you will:
Analyze Data for Insights: Collect, clean, and analyze structured and unstructured data to identify patterns and trends.
Visualize & Communicate Results: Create dashboards and visual reports to support business decisions.
Collaborate Across Teams: Work with product, engineering, marketing, and leadership teams to define KPIs and improve processes using data.

What You’ll Do
📊 Data Analysis & Reporting
Interpret complex datasets to extract meaningful business insights.
Conduct exploratory and statistical data analysis.
Build and maintain dashboards using tools like Power BI, Tableau, or Looker.
📂 Data Management & Quality
Collect, clean, and validate data from multiple sources.
Perform data audits to ensure accuracy and completeness.
Automate reports and build repeatable data processes.
🤝 Stakeholder Collaboration
Translate business questions into analytical solutions.
Present findings clearly and confidently to both technical and non-technical stakeholders.
Partner with data engineers and developers to improve data pipelines and accessibility.

Technical Requirements & Skills
Experience: 3–6 years in a Data Analyst or similar role.
Tools: Proficient in SQL, Excel, and at least one BI tool (Tableau, Power BI, Looker).
Languages: Experience with Python or R for data analysis (preferred).
Data Handling: Strong understanding of databases, joins, indexing, and data cleaning.
Bonus: Familiarity with data warehousing (e.g., Snowflake, Redshift), A/B testing, or Google Analytics.

What We’re Looking For
A sharp analytical thinker with a passion for making data useful.
A self-starter comfortable working independently and managing multiple projects.
A team player who communicates clearly and thrives in collaborative environments.
A freelancer excited by innovation and fast-paced problem solving.

Why Join Us?
🚀 Immediate Impact: Work on high-visibility projects driving business decisions.
🌍 Remote & Flexible: Set your own schedule and work style.
🔁 Ongoing Projects: Get matched to future gigs that align with your skills and goals.
💡 Innovative Work: Collaborate with startups building the future of data-driven products.

Ready to turn data into impact? Apply now to join BeGig as a Data Analyst and be part of a curated network driving innovation across industries.

Posted 3 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

🔍 Hiring: Big Data Engineer (AWS + SQL Expertise)
📍 Locations: Chennai (Primary), Gurugram, Pune
💼 Experience Level: A – 6 years | AC – 8 years

✅ Key Skills Required (Must Have):
Cloud: AWS Big Data Stack – S3, Glue, Athena, EMR
Programming: Python, Spark, SQL, Mulesoft, Talend, dbt
Data Warehousing & ETL: Redshift / Snowflake, ETL/ELT pipeline development
Data Handling: Structured & semi-structured data transformation
Process & Optimization: Data ingestion, transformation, performance tuning

✅ Preferred Skills (Good to Have):
AWS Data Engineer Certification
Experience with Spark, Hive, Kafka, Kinesis, Airflow
Familiarity with ServiceNow, Jira (ITSM tools)
Data modeling experience

🎯 Key Responsibilities:
Build scalable data pipelines (ETL/ELT) for diverse data sources
Clean, validate, and transform data for business consumption
Develop data model objects (views, tables) for downstream usage
Optimize storage and query performance in AWS environments
Collaborate with business & technical stakeholders
Support code reviews, documentation, and incident resolution.

🎓 Qualifications:
Bachelor's in Computer Science, IT, or related field (or equivalent experience)
Strong problem-solving, communication, and documentation skills
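For this kind of AWS big data work, a typical pipeline reads raw files from S3, cleans and conforms them with Spark, and writes partitioned Parquet that Athena or Redshift Spectrum can query. The minimal PySpark sketch below illustrates that pattern; the schema, columns, and output path are placeholders, and a small in-memory DataFrame stands in for the S3 source so the example runs locally.

```python
# Minimal PySpark ETL sketch: ingest raw events, clean and conform them, and write
# partitioned Parquet for downstream querying (e.g., via Athena or Redshift Spectrum).
# Paths and column names are illustrative; in this role they would point at S3
# locations such as s3://raw-bucket/events/ and s3://curated-bucket/events/.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl-sketch").getOrCreate()

# Stand-in for spark.read.json("s3://raw-bucket/events/") so the sketch runs locally.
raw = spark.createDataFrame(
    [
        ("u1", "2024-05-01T10:15:00", "purchase", "129.99"),
        ("u2", "2024-05-01T11:02:00", "view", None),
        ("u1", "2024-05-02T09:45:00", "purchase", "54.50"),
    ],
    ["user_id", "event_ts", "event_type", "amount"],
)

curated = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["user_id", "event_ts", "event_type"])
    .filter(F.col("event_type").isNotNull())
)

# Partitioning by date keeps Athena/Spectrum scans cheap; the output path is a placeholder.
curated.write.mode("overwrite").partitionBy("event_date").parquet("./curated/events")

spark.stop()
```

The same transformation logic could equally run as an AWS Glue job or an EMR step; only the job wrapper and the read/write locations would change.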

Posted 3 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Skill: Technical Business Analyst
Exp: 10+ Yrs
Location: Gurgaon

Key Responsibilities
Requirements Gathering & Analysis
– Engage with stakeholders (business users, risk, compliance, operations) to elicit detailed functional and non-functional requirements.
– Translate complex banking processes into clear user stories, process flows, and specification documents.
Solution Design & Validation
– Collaborate with data engineers, architects, and BI teams to define data models, ETL pipelines, and integration points.
– Validate technical designs against business needs and regulatory requirements (e.g., KYC, AML, Basel norms).
Development Support
– Leverage your hands-on development experience (SQL, Python/Java) to prototype data extracts, transformations, and reports.
– Assist the development team with code reviews, test data setup, and troubleshooting.
Testing & Quality Assurance
– Define acceptance criteria; design and execute system, integration, and user-acceptance test cases.
– Coordinate defect triage and ensure timely resolution.
Documentation & Training
– Maintain up-to-date functional specifications, data dictionaries, and user guides.
– Conduct workshops and training sessions for users and support teams.
Project & Stakeholder Management
– Track project deliverables, highlight risks and dependencies, and communicate progress.
– Act as the primary point of contact between business, IT, and external vendors.

Required Skills & Experience
Domain Expertise:
– 10–12 years' experience in banking (retail, corporate, or investment) with a focus on data-driven initiatives.
– Strong understanding of banking products (loans, deposits, payments) and the regulatory landscape.
Technical Proficiency:
– Hands-on development background: advanced SQL; scripting in Python or Java.
– Experience designing and supporting ETL processes (Informatica, Talend, or equivalent).
– Familiarity with data warehousing concepts, dimensional modeling, and metadata management.
– Exposure to cloud data services (AWS Redshift, Azure Synapse, GCP BigQuery) is a plus.
Analytical & Process Skills:
– Solid experience in data profiling, data quality assessment, and root-cause analysis.
– Comfortable with Agile methodologies; adept at sprint planning and backlog management.
Communication & Collaboration:
– Excellent verbal and written skills; able to explain technical concepts to non-technical audiences.
– Proven track record of stakeholder management at all levels.
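Since the role calls for hands-on data profiling and quality assessment in SQL or Python, here is one simple way that step often looks in pandas: per-column null rates, distinct counts, and numeric ranges, plus a duplicate-key check. The customer table and its columns are hypothetical examples, not taken from the posting.

```python
# Simple column-level profiling of a tabular extract: null rate, distinct values,
# and min/max for numeric columns. The sample data stands in for a banking extract.
import pandas as pd

customers = pd.DataFrame(
    {
        "customer_id": ["C001", "C002", "C003", "C003"],
        "kyc_status": ["VERIFIED", None, "PENDING", "PENDING"],
        "outstanding_balance": [15000.0, 0.0, None, 2300.5],
    }
)

profile = pd.DataFrame(
    {
        "null_rate": customers.isna().mean(),
        "distinct_values": customers.nunique(),
        "min": customers.min(numeric_only=True),
        "max": customers.max(numeric_only=True),
    }
)

# Duplicate-key check, a common root-cause finding in data quality assessments.
duplicate_keys = customers["customer_id"].duplicated().sum()

print(profile)
print(f"duplicate customer_id rows: {duplicate_keys}")
```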

Posted 3 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Linkedin logo

Job Role: Power BI & Data Visualization Specialist
Experience: 4+ years
Location: Pan India (Remote)
Notice period: Immediate - 15 days only

Key Responsibilities:
Design and develop Power BI dashboards, reports, and data visualizations tailored to business requirements.
Gather and analyse user requirements and translate them into functional BI solutions.
Connect Power BI to various enterprise data sources including SAP BW/HANA, Oracle, Hyperion, spreadsheets, and cloud platforms like Snowflake and Amazon Redshift.
Develop and maintain semantic data models and implement efficient data transformations using Power Query.
Optimize report/dashboard performance, scalability, and user experience.
Work closely with business stakeholders, analysts, and IT teams to ensure data accuracy and clarity in reporting.
Troubleshoot data issues and manage the quality and integrity of reported data.
Maintain and enhance existing reports and dashboards to meet evolving business needs.
Provide documentation and training to end-users and promote self-service BI usage across the organization.
Follow data governance and security protocols to ensure compliance.

Must-Have Skills:
Strong understanding of data visualization principles and UX/UI design.
Proficiency in DAX, Power Query, and data modelling best practices.
Experience with enterprise systems such as SAP BW/HANA, Oracle, and Hyperion.
Skilled in accessing and transforming data from spreadsheets, Snowflake, Amazon Redshift, and other cloud data platforms.
Strong SQL skills with understanding of relational and columnar databases.
Familiarity with hierarchical and financial planning data structures.
Ability to work with large, complex datasets and deliver user-friendly dashboards.
Excellent communication and presentation skills for both technical and non-technical audiences.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About Flutter Entertainment
Flutter Entertainment is the world’s largest sports betting and iGaming operator with 13.9 million average monthly players worldwide and an annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME’s 100 Most Influential Companies under the 'Pioneers' category—a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game!

Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed.

Flutter Entertainment India
Our Hyderabad office, located in one of India’s premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 900 talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we’re dedicated to creating a brighter future for our customers, colleagues, and communities.

Overview Of The Role
We are looking for a Data Engineer with 3 to 5 years of experience to help design, build, and maintain the next-generation data platform for our Sisal team. This role will leverage modern cloud technologies, infrastructure as code (IaC), and advanced data processing techniques to drive business value from our data assets. You will collaborate with cross-functional teams to ensure data availability, quality, and reliability while applying expertise in Databricks on AWS, Python, CI/CD, and Agile methodologies to deliver scalable and efficient solutions.

Key Responsibilities
Design and implement scalable ETL processes and data pipelines that integrate with diverse data sources.
Build streaming and batch data processing solutions using Databricks on AWS.
Develop and optimize Lakehouse architectures; work with big data access patterns to process large-scale datasets efficiently.
Drive automation and efficiency using CI/CD pipelines, IaC, and DevOps practices.
Improve database performance, implement best practices for data governance, and enhance data security.

Required Skills
3 to 5 years of experience in data engineering and ETL pipeline development.
Hands-on experience with Databricks on AWS.
Proven experience designing and implementing scalable data warehousing solutions.
Expertise in AWS data services, particularly DynamoDB, Glue, Athena, EMR, Redshift, Lambda, and Kinesis.
Strong programming skills in Python (PySpark/Spark SQL experience preferred) and Java.

Desirable / Preferred Skills
Knowledge of streaming data processing (e.g., Kafka, Kinesis, Spark Streaming).
Experience with CI/CD tools and automation (Git, Jenkins, Ansible, Shell Scripting, Unit/Integration Testing).
Familiarity with Agile methodologies and DevOps best practices.

Benefits We Offer
Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling.
Career growth through Internal Mobility Programs.
Comprehensive Health Insurance for you and dependents.
Well-Being Fund and 24/7 Assistance Program for holistic wellness.
Hybrid Model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals.
Free Meals, Cab Allowance, and a Home Office Setup Allowance.
Employer PF Contribution, gratuity, Personal Accident & Life Insurance.
Sharesave Plan to purchase discounted company shares.
Volunteering Leave and Team Events to build connections.
Recognition through the Kudos Platform and Referral Rewards.

Why Choose Us
Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
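As a rough illustration of the streaming-plus-batch Lakehouse pattern this role describes, the sketch below reads incoming events with PySpark Structured Streaming and appends them to a Delta table. The source path, checkpoint location, schema, and table layout are hypothetical, and the Delta format assumes a Databricks (or Delta-enabled Spark) runtime; on Databricks the SparkSession is provided by the platform.

```python
# Minimal Structured Streaming sketch for a Databricks-style Lakehouse pipeline.
# Paths and schema are placeholders; in production the source might be Kinesis or Kafka.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-stream-sketch").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("stake", DoubleType()),
    StructField("placed_at", TimestampType()),
])

# Streaming ingest from a landing area (hypothetical mount point).
events = (
    spark.readStream
    .schema(event_schema)
    .json("/mnt/landing/events/")
)

cleaned = (
    events
    .dropna(subset=["event_id", "user_id"])
    .withColumn("ingest_date", F.to_date("placed_at"))
)

# Append to a Delta table; the checkpoint directory makes the stream restartable.
query = (
    cleaned.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/events_bronze")
    .partitionBy("ingest_date")
    .start("/mnt/lakehouse/bronze/events")
)

query.awaitTermination()
```

A batch job for the same table would replace readStream/writeStream with read/write, which is what makes the bronze-silver-gold Lakehouse layering convenient for mixed workloads.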

Posted 3 days ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad, Bengaluru

Work from Office

Naukri logo

About our team
DEX is the central data org for Kotak Bank and manages the bank's entire data experience. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, currently an on-premise solution, into a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technologists a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics.

As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today, be an early member of Kotak's digital transformation journey, learn and leverage technology to build complex data platform solutions (real-time, micro-batch, batch and analytics) in a programmatic way, and build forward-looking systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake, managed compute and orchestration frameworks (including serverless data solutions), the central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, cost optimization solutions like EMR optimizers, automations, and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. It will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.

Data Governance
This will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship and the Data Quality platform.

If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you.

Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer
Bachelor's degree in Computer Science, Engineering, or a related field
3-5 years of experience in data engineering
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
Experience with data pipeline tools such as Airflow and Spark
Experience with data modeling and data quality best practices
Excellent problem-solving and analytical skills
Strong communication and teamwork skills
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
Strong advanced SQL skills

PREFERRED QUALIFICATIONS
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
Prior experience in the Indian banking segment and/or fintech is desired
Experience with non-relational databases and data stores
Building and operating highly available, distributed data processing systems for large datasets
Professional software engineering and best practices for the full software development life cycle
Designing, developing, and implementing different types of data warehousing layers
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
Building scalable data infrastructure and understanding distributed systems concepts
SQL, ETL, and data modelling
Ensuring the accuracy and availability of data to customers
Proficiency in at least one scripting or programming language for handling large-volume data processing
Strong presentation and communication skills

For Managers:
Customer centricity and obsession for the customer
Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working
Ability to structure and organize teams, and streamline communication
Prior work experience executing large-scale Data Engineering projects
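Because this role lists MWAA (managed Airflow) alongside EMR, Glue, and Redshift, here is a minimal Airflow DAG sketch of the kind such a platform would schedule. The DAG id, schedule, and task bodies are illustrative placeholders; a production pipeline would typically submit EMR or Glue jobs and run a Redshift COPY rather than plain Python callables.

```python
# Minimal Airflow DAG sketch: extract from a source, transform (in practice via an
# EMR step or Glue job), and load curated data into Redshift. All names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # In a real pipeline this might pull from RDS or land files in S3.
    print("extracting source data")


def transform(**context):
    # A production version might submit an EMR step or a Glue job instead.
    print("transforming data with Spark")


def load(**context):
    # e.g., COPY curated Parquet from S3 into Redshift.
    print("loading curated data into Redshift")


with DAG(
    dag_id="daily_lakehouse_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

The `>>` operator defines task dependencies, so Airflow (or MWAA) runs extract, transform, and load in order once per day and retries or backfills according to the scheduler's configuration.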

Posted 3 days ago

Apply

5.0 years

2 - 9 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Senior MIS Analyst
Industry: Digital Transformation & Analytics Consulting

We empower enterprises to unlock data-driven efficiency by building robust Management Information Systems (MIS) that turn raw operational data into strategic insight. Join our on-site analytics hub in India and steer mission-critical reporting initiatives end-to-end.

Role & Responsibilities
Own the complete MIS lifecycle—data extraction, transformation, validation, visualization, and scheduled distribution.
Design automated dashboards and reports in Excel/Power BI that track KPIs, SLAs, and cost metrics for cross-functional leadership.
Write optimized SQL queries and ETL scripts to consolidate data from ERP, CRM, and cloud platforms into a single reporting warehouse.
Establish strong data governance, ensuring integrity, version control, and auditability of all reports.
Collaborate with finance, operations, and technology teams to gather requirements, translate them into reporting specs, and deliver within committed timelines.
Mentor junior analysts on advanced Excel functions, VBA macros, and visualization best practices.

Skills & Qualifications
Must-Have
Bachelor's degree in Information Systems, Computer Science, or equivalent.
5+ years of professional experience in MIS or Business Intelligence.
Expert-level Excel with pivots, Power Query, and VBA scripting.
Proficiency in SQL and relational databases (MySQL/SQL Server/PostgreSQL).
Hands-on experience building interactive dashboards in Power BI or Tableau.
Demonstrated ability to translate raw data into executive-ready insights.
Preferred
Experience with ETL tools (Informatica, Talend) or Python pandas.
Knowledge of cloud data stacks (Azure Synapse, AWS Redshift, or GCP BigQuery).
Understanding of statistical methods for forecasting and trend analysis.

Benefits & Culture Highlights
High-ownership role with direct visibility to C-suite decision makers.
Continuous learning budget for BI certifications and advanced analytics courses.
Collaborative, innovation-first culture that rewards data-driven thinking.

Apply now to transform complex datasets into strategic clarity and accelerate your analytics career.

Skills: business intelligence, ims, automation, sql, fms, google sheets, data governance, vba, excel, etl, analytics, pms, data visualization, dashboard design, data analysis, advanced, looker, power bi, dashboards
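To illustrate the consolidation step at the heart of this role, the sketch below pulls operational data with SQL, joins it with a spreadsheet extract, and produces a small SLA-compliance summary of the kind a scheduled MIS report would distribute. An in-memory SQLite database and a hand-built DataFrame stand in for the ERP/CRM sources; the tables, columns, and SLA figures are hypothetical.

```python
# Minimal sketch of an MIS consolidation step: SQL extract + spreadsheet join + KPI summary.
# SQLite and a hand-built DataFrame stand in for MySQL/SQL Server and an Excel file.
import sqlite3
import pandas as pd

# Stand-in for the ERP/CRM database.
conn = sqlite3.connect(":memory:")
pd.DataFrame(
    {
        "ticket_id": [1, 2, 3, 4],
        "region": ["North", "North", "South", "South"],
        "resolution_hours": [4.0, 30.0, 8.5, 2.0],
    }
).to_sql("tickets", conn, index=False)

# SQL does the extraction; pandas handles the final shaping.
tickets = pd.read_sql_query("SELECT region, resolution_hours FROM tickets", conn)

# Stand-in for a spreadsheet extract, e.g. pd.read_excel("sla_targets.xlsx").
sla_targets = pd.DataFrame({"region": ["North", "South"], "sla_hours": [24.0, 12.0]})

report = (
    tickets.merge(sla_targets, on="region")
    .assign(within_sla=lambda d: d["resolution_hours"] <= d["sla_hours"])
    .groupby("region", as_index=False)
    .agg(tickets=("within_sla", "size"), sla_compliance=("within_sla", "mean"))
)

# Scheduled distribution step: in practice this might refresh a Power BI dataset
# or email the file; writing a CSV keeps the sketch self-contained.
report.to_csv("sla_compliance_report.csv", index=False)
print(report)
```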

Posted 3 days ago

Apply

Featured Companies