8.0 years
0 Lacs
India
Remote
Job Title: ENT DBA
Location: Remote
Experience: 8+ Years

Job Description: We are seeking a skilled and proactive Database Administrator (DBA) with strong SQL development expertise to manage, optimize, and support our database systems. The ideal candidate will have hands-on experience with cloud-based and on-premises database platforms, with a strong emphasis on AWS RDS, PostgreSQL, Redshift, and SQL Server. A background in developing and optimizing complex SQL queries, stored procedures, and data workflows is essential.

Key Responsibilities:
- Design, implement, and maintain high-performance, scalable, and secure database systems on AWS RDS, PostgreSQL, Redshift, and SQL Server.
- Develop, review, and optimize complex SQL queries, views, stored procedures, triggers, and functions (an illustrative tuning sketch follows this listing).
- Monitor database performance, implement tuning improvements, and ensure high availability and disaster recovery strategies.
- Collaborate with development and DevOps teams to support application requirements, schema changes, and release cycles.
- Perform database migrations, upgrades, and patch management.
- Create and maintain documentation related to database architecture, procedures, and best practices.
- Implement and maintain data security measures and access controls.
- Support ETL processes and troubleshoot data pipeline issues as needed.

Mandatory Skills & Experience:
- AWS RDS, PostgreSQL, Amazon Redshift, Microsoft SQL Server.
- Proficiency in SQL development, including performance tuning and query optimization.
- Experience with backup strategies, replication, monitoring, and high-availability database configurations.
- Solid understanding of database design principles and best practices.
- Knowledge of SSIS, SSRS, and SSAS development and management.
- Knowledge of database partitioning, compression, and online performance monitoring/tuning.
- Experience in the database release management process and script review.
- Knowledge of database mirroring, AlwaysOn Availability Groups (AAG), and disaster recovery procedures.
- Knowledge of database monitoring and different monitoring tools.
- Knowledge of data modeling, database optimization, and relational database schemas.
- Ability to write complex SQL queries and debug someone else's code.
- Experience in managing internal and external MS SQL database security.
- Knowledge of database policies, certificates, Database Mail, and resource management.
- Knowledge of SQL Server internals (memory usage, DMVs, threads, wait stats, Query Store, SQL Profiler).
- Knowledge of cluster server management and failovers.
- Knowledge of data modeling (SSAS) and reporting services (SSRS, Tableau, Power BI, Athena).
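A large part of this role is the query-tuning work called out above. As a rough illustrative sketch only (not the employer's actual tooling), the Python snippet below assumes psycopg2, a hypothetical RDS PostgreSQL endpoint, and invented table names, and simply prints the execution plan for a candidate query:

```python
# Hypothetical example: reviewing a slow query's plan on an RDS PostgreSQL instance.
# Connection details, table names, and the query itself are placeholders.
import psycopg2

QUERY = """
    SELECT o.customer_id, COUNT(*) AS order_count
    FROM orders o
    JOIN order_items i ON i.order_id = o.id
    WHERE o.created_at >= NOW() - INTERVAL '30 days'
    GROUP BY o.customer_id
    ORDER BY order_count DESC
    LIMIT 50
"""

def explain_query(dsn: str, query: str) -> None:
    """Print the execution plan with runtime statistics for a candidate query."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # EXPLAIN (ANALYZE, BUFFERS) runs the query and reports actual timings
            # and buffer usage, which is the starting point for index/tuning work.
            cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + query)
            for (plan_line,) in cur.fetchall():
                print(plan_line)

if __name__ == "__main__":
    explain_query("host=my-rds-host dbname=appdb user=dba password=***", QUERY)
```

Note that EXPLAIN (ANALYZE, BUFFERS) actually executes the statement, so in practice one would only run it against read-only queries or inside a rolled-back transaction.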
Posted 23 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Embark on a transformative journey as a Software Engineer - Full Stack Developer at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

Operational Support Systems (OSS) Platform Engineering is a new team within the newly formed Network OSS & Tools functional unit in the Network Product domain at Barclays. The Barclays OSS Platform Engineering team is responsible for the design, build, and run of the underlying OSS infrastructure and toolchain, across cloud and on-prem, on which the core systems and tools required to run the Barclays Global Network reside.

To be successful in this role as a Software Engineer - Full Stack Developer you should possess the following skillsets:
- Demonstrable expertise across front-end and back-end skillsets.
- Java proficiency and the Spring ecosystem (Spring MVC, Data JPA, Security, etc.) with strong SQL and NoSQL integration expertise.
- React.js and JavaScript expertise: Material UI, Ant Design, and state management expertise (Redux, Zustand, or Context API).
- Strong knowledge of runtimes (virtualisation, containers, and Kubernetes) and expertise with test-driven development using frameworks like Cypress, Playwright, Selenium, etc.
- Strong knowledge of CI/CD pipelines and tooling: GitHub Actions, Jenkins, GitLab CI, or similar.
- Monitoring and observability: logging, tracing, and alerting, with knowledge of SRE integrations into open-source tooling like Grafana/ELK.

Some other highly valued skills include:
- Expertise building ELT pipelines and cloud/storage integrations, including data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.).
- Expertise with security (OAuth2, CSRF/XSS protection), secure coding practices, and performance optimization: JVM tuning, performance profiling, caching, lazy loading, rate limiting, and high availability with large datasets.
- Expertise in public, private, and hybrid cloud technologies (DC, AWS, Azure, GCP, etc.) and across broad network domains (physical and wireless): WAN/SD-WAN/LAN/WLAN, etc.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Pune office.

Purpose of the role
To lead and manage engineering teams, providing technical guidance, mentorship, and support to ensure the delivery of high-quality software solutions, driving technical excellence, fostering a culture of innovation, and collaborating with cross-functional teams to align technical decisions with business objectives.

Accountabilities
- Lead engineering teams effectively, fostering a collaborative and high-performance culture to achieve project goals and meet organizational objectives.
- Oversee timelines, team allocation, risk management, and task prioritization to ensure the successful delivery of solutions within scope, time, and budget.
- Mentor and support team members' professional growth, conduct performance reviews, provide actionable feedback, and identify opportunities for improvement.
- Evaluate and enhance engineering processes, tools, and methodologies to increase efficiency, streamline workflows, and optimize team productivity.
- Collaborate with business partners, product managers, designers, and other stakeholders to translate business requirements into technical solutions and ensure a cohesive approach to product development.
- Enforce technology standards, facilitate peer reviews, and implement robust testing practices to ensure the delivery of high-quality solutions.

Vice President Expectations
To contribute to or set strategy, drive requirements, and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies and processes; deliver continuous improvements and escalate breaches of policies and procedures.

If managing a team, they define jobs and responsibilities, plan for the department's future needs and operations, counsel employees on performance, and contribute to employee pay decisions and changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short- and long-term goals and ensuring that budgets and schedules meet corporate requirements.

If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.

Alternatively, for an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction. They will lead collaborative, multi-year assignments and guide team members through structured assignments, identifying the need to include other areas of specialisation to complete assignments. They will train, guide, and coach less experienced specialists and provide information affecting long-term profits, organisational risks, and strategic decisions.

Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment. Manage and mitigate risks through assessment, in support of the control and governance agenda. Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does. Demonstrate comprehensive understanding of the organisation's functions to contribute to achieving the goals of the business. Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and business strategies. Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions. Adopt and include the outcomes of extensive research in problem-solving processes. Seek out, build, and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 23 hours ago
4.0 - 5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Overview
Seeking an experienced Data Engineer with 4 to 5 years of experience to design, build, and maintain scalable data infrastructure and pipelines. You'll work with cross-functional teams to ensure reliable data flow from various sources to analytics platforms, enabling data-driven decision making across the organization.

Key Responsibilities
Data Pipeline Development: Design and implement robust ETL/ELT pipelines using tools like Apache Airflow, Spark, or cloud-native solutions (a minimal sketch follows this listing). Build real-time and batch processing systems to handle high-volume data streams. Optimize data workflows for performance, reliability, and cost-effectiveness.
Infrastructure & Architecture: Develop and maintain data warehouses and data lakes using platforms like Snowflake, Redshift, BigQuery, or Databricks. Implement data modeling best practices including dimensional modeling and schema design. Architect scalable solutions on cloud platforms (AWS, GCP, Azure).
Data Quality & Governance: Implement data quality checks, monitoring, and alerting systems. Establish data lineage tracking and metadata management. Ensure compliance with data privacy regulations and security standards.
Collaboration & Support: Partner with data scientists, analysts, and business stakeholders to understand requirements. Provide technical guidance on data architecture decisions. Mentor junior engineers and contribute to team knowledge sharing.

Required Qualifications
Technical Skills:
- 4-5 years of experience in data engineering or a related field.
- Proficiency in Python, SQL, and at least one other programming language (Java, Scala, Go).
- Strong experience with big data technologies (Spark, Kafka, Hadoop ecosystem).
- Hands-on experience with cloud platforms and their data services.
- Knowledge of containerization (Docker, Kubernetes) and infrastructure as code.
(ref: hirist.tech)
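For illustration of the Airflow-based pipeline work mentioned under Data Pipeline Development, here is a minimal, hedged sketch assuming Apache Airflow 2.x; the DAG id, schedule, and task bodies are invented placeholders, not details from the posting:

```python
# Minimal daily ETL DAG sketch for Apache Airflow 2.x. Extract/transform/load
# bodies are placeholders standing in for real source and warehouse logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw records from a source system (placeholder data).
    return [{"id": 1, "amount": 42.0}, {"id": 2, "amount": -1.0}]


def transform(**context):
    # Read the upstream task's output via XCom and apply a simple business rule.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [r for r in rows if r["amount"] > 0]


def load(**context):
    # Write the cleaned rows to the warehouse (placeholder).
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

In a real pipeline the XCom hand-off would be replaced by staging data in object storage or a warehouse table, since XCom is only suited to small metadata payloads.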
Posted 1 day ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
*12-Month, Fixed-Term Contract*

As a Business Intelligence Engineer, you will be deciphering our customers' ever-evolving needs and shaping solutions that elevate their experience with Amazon.

A successful candidate will possess strong analytical and problem-solving skills, leveraging data and metrics to inform strategic decisions, along with impeccable attention to detail and clear, compelling communication skills, capable of articulating data insights to diverse stakeholders.

Key job responsibilities
- Deliver on all analytics requirements across business areas.
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, data pipelines, etc. to drive key business decisions.
- Ensure data accuracy by validating data for new and existing tools.
- Perform both ad-hoc and strategic data analyses (a short sketch follows this listing).
- Build various data visualizations to tell the story and let the data speak for itself.
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
- Build automation to reduce dependencies on manual data pulls.

A day in the life
A day in the life of a Business Intelligence Engineer will include working closely with Product Managers and Software Developers: building dashboards, performing root cause analysis, and sharing actionable insights with stakeholders to enable data-informed decision making. You will lead reporting and analytics initiatives to drive data-informed decision making; design, develop, and maintain ETL processes and data visualization dashboards using Amazon QuickSight; and transform complex business requirements into actionable analytics solutions.

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g. t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI MAA 15 SEZ
Job ID: A3018486
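As a flavour of the ad-hoc analysis work referenced above (not Amazon's actual code or schema), the sketch below assumes a Redshift cluster reachable over the PostgreSQL protocol via psycopg2 and an invented fact table, and pulls a 28-day order series into pandas before a simple sanity check:

```python
# Illustrative ad-hoc analysis: pull a metric from Redshift into pandas.
# The connection details, table, and column names are invented placeholders;
# Redshift speaks the PostgreSQL wire protocol, so psycopg2 works here.
import pandas as pd
import psycopg2

SQL = """
    SELECT order_date::date AS day, COUNT(DISTINCT order_id) AS orders
    FROM fact_orders
    WHERE order_date >= DATEADD(day, -28, CURRENT_DATE)
    GROUP BY 1
    ORDER BY 1
"""

with psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com", port=5439,
    dbname="analytics", user="bi_user", password="***"
) as conn:
    daily = pd.read_sql(SQL, conn)

# Simple validation before the numbers feed a dashboard: check for missing days.
expected_days = pd.date_range(daily["day"].min(), daily["day"].max()).date
missing = set(expected_days) - set(daily["day"])
print(f"{len(daily)} days loaded, {len(missing)} gaps in the series")
```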
Posted 1 day ago
8.0 years
0 Lacs
India
Remote
Position: Senior AWS Data Engineer
Location: Remote
Salary: Open
Work Timings: 2:30 PM to 11:30 PM IST
Need someone who can join immediately or within 15 days.

Responsibilities:
- Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
- Implement data processing and transformation workflows using Apache Spark and SQL to support analytics and reporting requirements.
- Build and maintain orchestration workflows to automate data pipeline execution, scheduling, and monitoring (a minimal sketch of this pattern follows this listing).
- Collaborate with analysts and business stakeholders to understand data requirements and deliver scalable data solutions.
- Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.

Qualifications:
- Minimum 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
- Hands-on experience in designing and building data pipelines on AWS cloud infrastructure.
- Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
- Strong experience with Apache Spark for data processing and analytics.
- Hands-on experience orchestrating and scheduling data pipelines using AppFlow, EventBridge, and Lambda.
- Solid understanding of data modeling, database design principles, SQL, and Spark SQL.
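The orchestration pattern referenced above (an EventBridge schedule invoking Lambda, which starts a Glue job) could look roughly like the sketch below; the job name, argument keys, and bucket are assumptions for illustration, not details from the posting:

```python
# Sketch of EventBridge -> Lambda -> Glue orchestration. The Glue job name and
# argument keys are hypothetical placeholders.
import json
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start the nightly Glue ETL job when the EventBridge schedule fires."""
    response = glue.start_job_run(
        JobName="nightly_orders_etl",                # hypothetical Glue job
        Arguments={
            "--target_date": event.get("time", ""),  # EventBridge event timestamp
            "--source_bucket": "s3://raw-zone-bucket",
        },
    )
    run_id = response["JobRunId"]
    print(json.dumps({"started_job_run": run_id}))
    return {"statusCode": 200, "jobRunId": run_id}
```

A production version would typically add idempotency checks and surface failures to a monitoring channel rather than relying on logs alone.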
Posted 1 day ago
6.0 - 8.0 years
8 - 12 Lacs
Mumbai, Bengaluru, Delhi / NCR
Work from Office
Skills Required
- Experience in designing and building a serverless data lake solution using a layered components architecture including ingestion, storage, processing, security & governance, data cataloguing & search, and consumption layers.
- Hands-on experience in AWS serverless technologies such as Lake Formation, Glue, Glue Python, Glue Workflows, Step Functions, S3, Redshift, QuickSight, Athena, AWS Lambda, and Kinesis. Experience in Glue is a must (see the Athena sketch after this listing for the consumption-layer side).
- Experience in the design, build, orchestration, and deployment of multi-step data processing pipelines using Python and Java.
- Experience in managing source data access security, configuring authentication and authorisation, and enforcing data policies and standards.
- Experience in AWS environment setup and configuration.
- Minimum 6 years of relevant experience with at least 3 years in building solutions using AWS.
- Ability to work under pressure and commitment to meet customer expectations.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
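On the consumption layer mentioned above, querying the data lake through Athena with boto3 might look like the following sketch; the database, table, and results bucket are invented placeholders:

```python
# Minimal sketch of querying a data lake's consumption layer with Athena.
# Database, table, and S3 output location are hypothetical.
import time
import boto3

athena = boto3.client("athena")

def run_athena_query(sql: str, database: str, output_s3: str) -> str:
    """Submit a query and poll until it reaches a terminal state."""
    start = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    query_id = start["QueryExecutionId"]

    # Poll until the query finishes; real pipelines would add timeouts/backoff.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)

status = run_athena_query(
    "SELECT region, COUNT(*) FROM curated.sales GROUP BY region",
    database="curated",
    output_s3="s3://my-athena-results/",
)
print("query finished with state:", status)
```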
Posted 1 day ago
10.0 years
0 Lacs
India
Remote
Join phData, a dynamic and innovative leader in the modern data stack. We partner with major cloud data platforms like Snowflake, AWS, Azure, GCP, Fivetran, Pinecone, Glean and dbt to deliver cutting-edge services and solutions. We're committed to helping global enterprises overcome their toughest data challenges.

phData is a remote-first global company with employees based in the United States, Latin America and India. We celebrate the culture of each of our team members and foster a community of technological curiosity, ownership and trust. Even though we're growing extremely fast, we maintain a casual, exciting work environment. We hire top performers and allow you the autonomy to deliver results.
- 5x Snowflake Partner of the Year (2020, 2021, 2022, 2023, 2024)
- Fivetran, dbt, Alation, Matillion Partner of the Year
- #1 Partner in Snowflake Advanced Certifications
- 600+ Expert Cloud Certifications (Sigma, AWS, Azure, Dataiku, etc.)
- Recognized as an award-winning workplace in the US, India and LATAM

About You: You are a strategic thinker with a passion for building scalable, cloud-native solutions. With a strong technical background, you excel in driving data platform implementations and managing complex projects. Your expertise in platforms like Snowflake, AWS, and Azure is complemented by a deep understanding of how to align data infrastructure with strategic business goals. You have a proven track record in professional consulting, building scalable, secure solutions that optimize data platform performance. You thrive in environments that challenge you to think critically, lead technical teams, and optimize systems for continuous improvement.

As a part of our Managed Services team, you are driven by a commitment to customer success and long-term growth. You embrace best practices, champion effective platform management, and contribute to the evolution of data platforms by promoting phData’s Operational Maturity Framework. Your approach is hands-on, collaborative, and results-oriented, making you a key player in ensuring clients' ongoing success.

Key Responsibilities:
- Technical Leadership: Lead the design, architecture, and implementation of large-scale data platform projects on Snowflake, AWS, and Azure. Guide technical teams through data migration, integration, and performance optimization.
- Platform Optimization, Integration and Automation: Identify opportunities for automation and performance optimization to enhance clients' data platform capabilities. Lead data migration to cloud platforms (e.g., Snowflake, Redshift), ensuring smooth integration of data lakes, data warehouses, and distributed systems.
- Platform Security: Set up clients' data platforms with industry best practices and robust security standards, including data governance.
- Process Engineering: Adapt to ongoing changes in the platform environment, data, and technology by debugging, enhancing features, and re-engineering as needed. Plan ahead for changes to upstream data sources to minimize impact on users and systems, ensuring scalable adoption of modern data platforms.
- Consulting Leadership: Manage multiple customer engagements to ensure timely project delivery, optimize operations to prevent resource overruns, and maximize platform ROI. Serve as a trusted advisor, offering strategic guidance on data platform optimization and addressing technical challenges to meet business goals.
- Cross-functional Collaboration & Mentorship: Partner with sales, engineering, and support teams to ensure seamless project delivery and high customer satisfaction. Provide mentorship and technical guidance to junior engineers, promoting a culture of continuous learning and excellence.

Key Skills and Experience:
Required:
- Solutions Architecture: 10 years of hands-on experience architecting, designing, implementing, and managing cloud-native data platforms and solutions.
- Client-Facing Skills: Strong communication skills, with experience presenting to executive stakeholders and creating detailed solution documentation.
- Data Platforms: Extensive experience managing enterprise data platforms (e.g., Snowflake, Redshift, Azure Data Warehouse) with strong skills in performance tuning and troubleshooting.
- Cloud Expertise: Deep knowledge of the AWS, Azure, and Snowflake ecosystems, including services like S3, ADLS, Kinesis, Data Factory, dbt, and Kafka.
- IT Operations: Build and manage scalable, secure data platforms aligned with strategic goals. Excel at optimizing systems and driving continuous improvement in platform performance.
- SQL Mastery: Advanced proficiency in Microsoft SQL, including writing, debugging, and optimizing queries.
- DevOps and Infrastructure: Proficiency in infrastructure-as-code (IaC) tools like Terraform or CloudFormation, and experience with CI/CD pipelines (e.g., Bitbucket, GitHub).
- Data Integration Tools: Expertise with tools such as AWS Data Migration Services, Azure Data Factory, Matillion, Fivetran, or Spark.

Preferred:
- Certifications: Snowflake SnowPro Core certification or equivalent.
- Python Proficiency: Experience using Python for task automation and operational efficiency.
- CI/CD Expertise: Hands-on experience with automated deployment frameworks, such as Flyway or Liquibase.

Education: A Bachelor's degree in Computer Science, Engineering, or a related field is highly preferred. Advanced degrees or equivalent certifications are also preferred.

Why phData? We offer:
- Remote-First Workplace
- Medical Insurance for Self & Family
- Medical Insurance for Parents
- Term Life & Personal Accident
- Wellness Allowance
- Broadband Reimbursement
- A 2-4 week bootcamp and continuous learning opportunities to enhance your skills and expertise
- Other perks, including paid certifications, a professional development allowance, and additional compensation for creating company-approved content (dashboards, blogs, videos, whitepapers, etc.)

phData celebrates diversity and is committed to creating an inclusive environment for all employees. Our approach helps us to build a winning team that represents a variety of backgrounds, perspectives, and abilities. So, regardless of how your diversity expresses itself, you can find a home here at phData. We are proud to be an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, color, religion, national origin, sex (including pregnancy), sexual orientation, gender identity, gender expression, age, veteran status, genetic information, disability, or other applicable legally protected characteristics. If you would like to request an accommodation due to a disability, please contact us at People Operations.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Sonatype is the software supply chain security company. We provide the world's best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise-grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world's largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100 and 15 million software developers, rely on Sonatype to optimize their software supply chains.

The Opportunity
We're looking for a Senior Data Engineer to join our growing Data Platform team. You'll play a key role in designing and scaling the infrastructure and pipelines that power analytics, machine learning, and business intelligence across Sonatype. You'll work closely with stakeholders across product, engineering, and business teams to ensure data is reliable, accessible, and actionable. This role is ideal for someone who thrives on solving complex data challenges at scale and enjoys building high-quality, maintainable systems.

What You'll Do
- Design, build, and maintain scalable data pipelines and ETL/ELT processes
- Architect and optimize data models and storage solutions for analytics and operational use
- Collaborate with data scientists, analysts, and engineers to deliver trusted, high-quality datasets
- Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake)
- Implement observability, alerting, and data quality monitoring for critical pipelines
- Drive best practices in data engineering, including documentation, testing, and CI/CD
- Contribute to the design and evolution of our next-generation data lakehouse architecture

Minimum Qualifications
- 5+ years of experience as a Data Engineer or in a similar backend engineering role
- Strong programming skills in Python, Scala, or Java
- Hands-on experience with HBase or similar NoSQL columnar stores
- Hands-on experience with distributed data systems like Spark, Kafka, or Flink
- Proficient in writing complex SQL and optimizing queries for performance
- Experience building and maintaining robust ETL/ELT (data warehousing) pipelines in production
- Familiarity with workflow orchestration tools (Airflow, Dagster, or similar)
- Understanding of data modeling techniques (star schema, dimensional modeling, etc.)

Bonus Points
- Experience working with Databricks, dbt, Terraform, or Kubernetes
- Familiarity with streaming data pipelines or real-time processing
- Exposure to data governance frameworks and tools
- Experience supporting data products or ML pipelines in production
- Strong understanding of data privacy, security, and compliance best practices

Why You'll Love Working Here
- Data with purpose: Work on problems that directly impact how the world builds secure software
- Modern tooling: Leverage the best of open-source and cloud-native technologies
- Collaborative culture: Join a passionate team that values learning, autonomy, and impact

At Sonatype, we value diversity and inclusivity.
We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
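The pipeline and ETL/ELT work described in the listing above could, in one simple batch form, look like the PySpark sketch below; the S3 paths and column names are assumptions for illustration only, not anything from Sonatype's platform:

```python
# Rough sketch of a PySpark batch job: read raw events from object storage,
# apply transformations, and write a partitioned, analytics-ready dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_daily_rollup").getOrCreate()

# Raw zone: hypothetical path containing event-level parquet files.
raw = spark.read.parquet("s3://data-lake/raw/events/")

cleaned = (
    raw
    .filter(F.col("event_type").isNotNull())          # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))  # derive partition column
    .dropDuplicates(["event_id"])                      # basic dedup
)

daily = (
    cleaned
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("events"),
         F.countDistinct("user_id").alias("users"))
)

# Curated zone: partitioned output ready for warehouse/BI consumption.
(
    daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://data-lake/curated/events_daily/")
)

spark.stop()
```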
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Sonatype is the software supply chain security company. We provide the world's best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise-grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world's largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100 and 15 million software developers, rely on Sonatype to optimize their software supply chains.

The Opportunity
We're looking for a Senior Data Engineer to join our growing Data Platform team. This role is a hybrid of data engineering and business intelligence, ideal for someone who enjoys solving complex data challenges while also building intuitive and actionable reporting solutions. You'll play a key role in designing and scaling the infrastructure and pipelines that power analytics, dashboards, machine learning, and decision-making across Sonatype. You'll also be responsible for delivering clear, compelling, and insightful business intelligence through tools like Looker Studio and advanced SQL queries.

What You'll Do
- Design, build, and maintain scalable data pipelines and ETL/ELT processes.
- Architect and optimize data models and storage solutions for analytics and operational use.
- Create and manage business intelligence reports and dashboards using tools like Looker Studio, Power BI, or similar.
- Collaborate with data scientists, analysts, and stakeholders to ensure datasets are reliable, meaningful, and actionable.
- Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake).
- Write complex, high-performance SQL queries to support reporting and analytics needs.
- Implement observability, alerting, and data quality monitoring for critical pipelines.
- Drive best practices in data engineering and business intelligence, including documentation, testing, and CI/CD.
- Contribute to the evolution of our next-generation data lakehouse and BI architecture.

What We're Looking For
- 5+ years of experience as a Data Engineer or in a hybrid data/reporting role.
- Strong programming skills in Python, Java, or Scala.
- Proficiency with data tools such as Databricks, data modeling techniques (e.g., star schema, dimensional modeling), and data warehousing solutions like Snowflake or Redshift.
- Hands-on experience with modern data platforms and orchestration tools (e.g., Spark, Kafka, Airflow).
- Proficient in SQL with experience in writing and optimizing complex queries for BI and analytics.
- Experience with BI tools such as Looker Studio, Power BI, or Tableau.
- Experience in building and maintaining robust ETL/ELT pipelines in production.
- Understanding of data quality, observability, and governance best practices.

Bonus Points
- Experience with dbt, Terraform, or Kubernetes.
- Familiarity with real-time data processing or streaming architectures.
- Understanding of data privacy, compliance, and security best practices in analytics and reporting.

Why You'll Love Working Here
- Data with purpose: Work on problems that directly impact how the world builds secure software.
- Full-spectrum impact: Use both engineering and analytical skills to shape product, strategy, and operations.
- Modern tooling: Leverage the best of open-source and cloud-native technologies.
- Collaborative culture: Join a passionate team that values learning, autonomy, and real-world impact.

At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Sonatype is the software supply chain security company. We provide the world's best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise-grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world's largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100 and 15 million software developers, rely on Sonatype to optimize their software supply chains.

About The Role
The Engineering Manager – Data role at Sonatype blends hands-on data engineering with leadership and strategic influence. You will lead high-performing data engineering teams to build the infrastructure, pipelines, and systems that fuel analytics, business intelligence, and machine learning across our global products. We're looking for a leader who brings deep technical experience in modern data platforms, is fluent in programming, and understands the nuances of open-source consumption and software supply chain security. This hybrid role is based out of our Hyderabad office.

What You'll Do
- Lead, mentor, and grow a team of data engineers responsible for building scalable, secure, and maintainable data solutions.
- Design and architect data pipelines, lakehouse systems, and warehouse models using tools such as Databricks, Airflow, Spark, and Snowflake/Redshift.
- Stay hands-on: write, review, and guide production-level code in Python, Java, or similar languages.
- Ensure strong foundations in data modeling, governance, observability, and data quality.
- Collaborate with cross-functional teams including Product, Security, Engineering, and Data Science to translate business needs into data strategies and deliverables.
- Apply your knowledge of open-source component usage, dependency management, and software composition analysis to ensure our data platforms support secure development practices.
- Embed application security principles into data platform design, supporting Sonatype's mission to secure the software supply chain.
- Foster an engineering culture that prioritizes continuous improvement, technical excellence, and team ownership.

Who You Are
- A technical leader with a strong background in data engineering, platform design, and secure software development.
- Comfortable operating across domains: data infrastructure, programming, architecture, security, and team leadership.
- Passionate about delivering high-impact results through technical contributions, mentoring, and strategic thinking.
- Familiar with modern data engineering practices, open-source ecosystems, and the challenges of managing data securely at scale.
- A collaborative communicator who thrives in hybrid and cross-functional team environments.

What You Need
- 6+ years of experience in data engineering, backend systems, or infrastructure development.
- 2+ years of experience in a technical leadership or engineering management role with hands-on contribution.
- Expertise in data technologies: Databricks, Spark, Airflow, Snowflake/Redshift, dbt, etc.
- Strong programming skills in Python, Java, or Scala with experience building robust, production-grade systems.
- Experience in data modeling (dimensional modeling, star/snowflake schema), data warehousing, and ELT/ETL pipeline development.
- Understanding of software dependency management and open-source consumption patterns.
- Familiarity with application security principles and a strong interest in secure software supply chains.
- Experience supporting real-time data systems or streaming architectures.
- Exposure to machine learning pipelines or data productization.
- Experience with tools like Terraform, Kubernetes, and CI/CD for data engineering workflows.
- Knowledge of data governance frameworks and regulatory compliance (GDPR, SOC 2, etc.).

Why Join Us?
- Help secure the software supply chain for millions of developers worldwide.
- Build meaningful software in a collaborative, fast-moving environment with strong technical peers.
- Stay hands-on while leading: technical leadership is part of the job, not separate from it.
- Join a global engineering organization with deep local roots and a strong team culture.
- Competitive salary, great benefits, and opportunities for growth and innovation.

At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
Posted 1 day ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high quality data products to support the Bank's regulatory requirements and data driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 3 to 4 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Hands-on Mantas (Oracle FCCM) Scenario Development experience throughout the full development life cycle
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager, Scenario Development; thorough knowledge and hands-on experience in Mantas FSDM, DIS, Batch Scenario Manager
- Big Data: Exposure to 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Iceberg, Delta
- Others: Basics of job schedulers like Autosys. Basics of entitlement management

Certification on any of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 day ago
0 years
0 Lacs
India
Remote
We Breathe Life Into Data
At Komodo Health, our mission is to reduce the global burden of disease. And we believe that smarter use of data is essential to this mission. That's why we built the Healthcare Map — the industry's largest, most complete, precise view of the U.S. healthcare system — by combining de-identified, real-world patient data with innovative algorithms and decades of clinical experience. The Healthcare Map serves as our foundation for a powerful suite of software applications, helping us answer healthcare's most complex questions for our partners. Across the healthcare ecosystem, we're helping our clients unlock critical insights to track detailed patient behaviors and treatment patterns, identify gaps in care, address unmet patient needs, and reduce the global burden of disease.

As we pursue these goals, it remains essential to us that we stay grounded in our values: be awesome, seek growth, deliver "wow," and enjoy the ride. At Komodo, you will be joining a team of ambitious, supportive Dragons with diverse backgrounds but a shared passion to deliver on our mission to reduce the burden of disease — and enjoy the journey along the way.

The Opportunity at Komodo Health
Komodo aims to build the best healthcare data architecture in the industry. We are rapidly growing and expanding our infrastructure stack and adopting SRE best practices. You will be joining the Sentinel team. We build, operate, and constantly improve our infrastructure offerings for 150+ client organizations. The team achieves our business goals by adopting best infrastructure practices and leveraging 100% automation to scale our product offerings and reduce costs. You will work with passionate team members across the country to accomplish our goal: reduce the burden of disease.

Looking back on your first 12 months at Komodo Health, you will have…
- Developed a deep understanding of Sentinel and Sentinel's customers
- Built and operated cloud infrastructure (AWS) based on customer requirements
- Solved development and deployment challenges around making Sentinel's infrastructure highly reliable, easy to maintain, and cost effective
- Participated in the development, execution, and support of new feature rollouts with solution architects and customer success teams
- Developed and contributed to existing and new monitoring and alerting systems for Sentinel infrastructure
- Hardened infrastructure security, including network, storage, user access, etc. (a small illustrative check appears after this listing)
- Responded to and solved key customer-reported issues in a timely manner
- Participated in an on-call rotation

What You Bring To Komodo Health
- Excitement about automation, building scalable technical solutions, and being a team player
- Proficiency in at least one of the mainstream programming languages such as Python and/or Java, with deep technical troubleshooting skills
- Proficiency in Terraform, Scalr, and/or other similar tools
- Experience with AWS's core services and their relationships; ability to create solutions based on users' high-level descriptions, learn new cloud technologies, and use them as needed
- Hands-on experience in building CI/CD pipelines using GitHub Actions, Jenkins, etc.
- Hands-on experience in networking, subnets, CIDR, etc. that are applicable to deploying applications and making sure they are accessible to our users across the globe
- Hands-on experience with scripting (Bash and/or PowerShell)
- Knowledge of operating system basics (Linux and/or Windows)
- Knowledge of main cloud vendors and big-data tools and frameworks like Snowflake, Airflow, and/or Spark
- Working knowledge of data modeling, schema design, and data storage with relational (e.g. PostgreSQL), NoSQL (e.g. DynamoDB, Redis), and MPP databases (e.g. Snowflake, Redshift, BigQuery)
- Excellent cross-team communication and collaboration skills, with the ability to initiate and effectively drive projects proactively

Additional skills and experience we'd prioritize (nice to have)…
- Experience with AWS preferred. AWS Cloud Infrastructure certification is a plus.
- Backend development experience such as building APIs and microservices using Python, Java, or any other mainstream programming language
- Experience with data privacy concerns such as HIPAA or GDPR
- Experience working with cross-functional teams and with other customer-facing teams
- Passion! We hope you are passionate about our mission and technology
- Ownership! We hope you own your work, are accountable, and push it through the finish line. We hope you treat yourself as a cofounder and do not hesitate to share any idea that helps Komodo
- Expertise! We do not need you to know everything, but we hope you have deep knowledge in at least one area and can start contributing quickly. And we would love to learn from you in your area(s) of expertise as well

Komodo's AI Standard
At Komodo, we're not just witnessing the AI revolution – we're leading it. This is a pivotal moment in time, where being first to market with AI transforms industries and sets the bar. We've already established industry leadership in leveraging AI to revolutionize healthcare, and we expect every team member to contribute. AI here isn't optional; it's foundational. We expect you to integrate AI into your daily work – from summarizing documents to automating workflows and uncovering insights. This isn't just about efficiency; it's about making every moment more meaningful, building on trust in AI, and driving our collective success. Join us in shaping the future of healthcare intelligence.

Where You'll Work
Komodo Health has a hybrid work model; we recognize the power of choice and importance of flexibility for the well-being of both our company and our individual Dragons. Roles may be completely remote based anywhere in the country listed, remote but based in a specific region, or local (commuting distance) to one of our hubs in San Francisco, New York City, or Chicago with remote work options.

What We Offer
Positions may be eligible for company benefits in accordance with Company policy. We offer a competitive total rewards package including medical, dental and vision coverage along with a broad range of supplemental benefits including 401k Retirement Plan, prepaid legal assistance, and more. We also offer paid time off for vacation, sickness, holiday, and bereavement. We are pleased to be able to provide 100% company-paid life insurance and long-term disability insurance. This information is intended to be a general overview and may be modified by the Company due to business-related factors.

Equal Opportunity Statement
Komodo Health provides equal employment opportunities to all applicants and employees.
We prohibit discrimination and harassment of any type with regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
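As a small, purely illustrative example of the storage-hardening work mentioned in the listing above (not Komodo's actual tooling), the boto3 sketch below flags S3 buckets that lack a public-access block or a default encryption configuration:

```python
# Illustrative S3 posture check: list buckets and flag any without a
# public-access block or default server-side encryption configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_public_access_block(Bucket=name)
        blocked = True
    except ClientError:
        blocked = False   # no public-access-block configuration on this bucket
    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError:
        encrypted = False  # no default server-side encryption configuration
    if not (blocked and encrypted):
        print(f"REVIEW: {name} public_access_block={blocked} encryption={encrypted}")
```

In practice this kind of check is usually enforced declaratively (for example via Terraform or AWS Config rules) rather than an ad-hoc script, but the script form keeps the idea visible.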
Posted 1 day ago
6.0 - 9.0 years
0 Lacs
India
Remote
About Juniper Square
Our mission is to unlock the full potential of private markets. Privately owned assets like commercial real estate, private equity, and venture capital make up half of our financial ecosystem yet remain inaccessible to most people. We are digitizing these markets, and as a result, bringing efficiency, transparency, and access to one of the most productive corners of our financial ecosystem. If you care about making the world a better place by making markets work better through technology – all while contributing as a member of a values-driven organization – we want to hear from you.

Juniper Square offers employees a variety of ways to work, ranging from a fully remote experience to working full-time in one of our physical offices. We invest heavily in digital-first operations, allowing our teams to collaborate effectively across 27 U.S. states, 2 Canadian provinces, India, Luxembourg, and England. We also have physical offices in San Francisco, New York City, Mumbai and Bangalore for employees who prefer to work in an office some or all of the time.

What You'll Do
- Design and architect complex systems with the team, actively participating in design reviews.
- Lead and mentor a team of junior developers, fostering their growth and development.
- Ensure high quality in team deliverables through guidance, code reviews, and setting best practices.
- Collaborate with cross-functional partners (Product, UX, QA) to ensure the team meets project timelines.
- Own monitoring, diagnosing, and resolving production issues within BizOps systems.
- Contribute to large-scale, complex projects, and execute development tasks through completion.
- Perform code reviews to uphold high quality and standards across codebases.
- Provide technical support for stakeholder groups, including Customer Success.
- Work closely with QA to maintain software quality and increase automation coverage.

Qualifications
- Bachelor's degree in Computer Science or equivalent work experience
- 6 to 9 years of experience building cloud-based web applications; previous experience leading a team will be a plus
- Expertise in object-oriented programming (OOP) languages such as Python, Java, or similar server-side web application development languages
- Experience with front-end technologies like React, CSS frameworks, HTML and JavaScript
- Experience with SQL database schema design and query optimization
- Experience with cloud technologies (AWS preferred) and container technologies (Docker and k8s)
- Experience with data warehousing technologies like Redshift, or knowledge of time-series databases, is a plus
- Experience with GraphQL, Apollo Server, and NestJS is a plus but not required
- You must be flexible and adaptable; you will be operating in a fast-paced startup environment

At Juniper Square, we believe building a diverse workforce and an inclusive culture makes us a better company. If you think this job sounds like a fit, we encourage you to apply even if you don't meet all the qualifications.
Posted 1 day ago
7.0 years
0 Lacs
India
On-site
At 3Pillar, our focus is on leveraging cutting-edge technologies that revolutionize industries by enabling data-driven decision making. As a Senior Data Engineer, you will hold a crucial position within our dynamic team, actively contributing to thrilling projects that reshape data analytics for our clients, providing them with a competitive advantage in their respective industries. If you are passionate about data analytics solutions that make a real-world impact, consider this your pass to the captivating world of Data Science and Engineering! 🔮🌐

Minimum Qualification
- Total IT experience should be 7+ years.
- Demonstrated expertise with a minimum of 5+ years of experience as a data engineer or in a similar role.
- Advanced SQL skills and experience with relational databases and database design.
- Experience working with cloud data warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, etc.).
- Strong Python skills with hands-on experience with Pandas, NumPy and other data-related libraries (a short sketch follows this listing).
- Experience with big data technologies like Spark, MapReduce, Hadoop, Hive, etc.
- Proficient in data pipeline and workflow management tools, e.g. Airflow.
- Experience with AWS data engineering services.
- Knowledge of AWS services viz. S3, Lambda, EMR, Glue ETL, Athena, RDS, Redshift, EC2, IAM, AWS Kinesis.
- Very good exposure to working on data lake and data warehouse solutions.
- Excellent problem-solving, communication, and organizational skills.
- Proven ability to work independently and with a team.
- Ability to guide other data engineers.
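For the Pandas/NumPy item above, a short, hedged sketch of typical cleanup-and-summarize work might look like this; the file, columns, and outlier threshold are invented for illustration:

```python
# Small sketch of Pandas/NumPy work: clean a raw extract and produce a summary.
# The CSV path, column names, and z-score threshold are hypothetical.
import numpy as np
import pandas as pd

orders = pd.read_csv("raw_orders.csv", parse_dates=["order_date"])

# Basic cleanup: drop exact duplicates, coerce bad amounts, fill missing region.
orders = orders.drop_duplicates()
orders["amount"] = pd.to_numeric(orders["amount"], errors="coerce")
orders["region"] = orders["region"].fillna("UNKNOWN")

# Flag outliers with a simple z-score so analysts can review them separately.
amounts = orders["amount"].to_numpy(dtype=float)
z = (amounts - np.nanmean(amounts)) / np.nanstd(amounts)
orders["is_outlier"] = np.abs(z) > 3

# Monthly summary per region, ready for a report or dashboard extract.
summary = (
    orders.groupby(["region", orders["order_date"].dt.to_period("M")])
    ["amount"].agg(["count", "sum", "mean"])
    .rename_axis(["region", "month"])
)
print(summary.head())
```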
Posted 1 day ago
10.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI and digital products for Fortune 500 clients across finance, retail and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.

Role & Responsibilities
- Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark and cloud object storage.
- Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
- Translate logical data models into physical schemas; own database design, partitioning and lifecycle management for cost-efficient performance.
- Implement, automate and monitor ETL/ELT workflows, ensuring reliability, observability and robust error handling.
- Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times (a brief sketch follows this listing).
- Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.

Skills & Qualifications
Must-Have
- 6–10 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
- Expert proficiency in PySpark, Python and advanced SQL, with a track record of performance-tuning distributed jobs.
- Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
- Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse/data-lake patterns.
- Strong problem-solving skills, a DevOps mindset and a commitment to code quality; comfortable mentoring fellow engineers.

Preferred
- Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
- Bachelor's or Master's degree in Computer Science, Engineering or a related field.
- Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
- Exposure to ML feature stores, MLOps workflows and data-governance/compliance frameworks.
- Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.

Benefits & Culture Highlights
- Remote-first and flexible hours with 25+ PTO days and comprehensive health cover.
- Annual training budget and certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
- Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.

Skills: data modeling, big data technologies, team leadership, agile methodologies, performance tuning, data, AWS, Airflow
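Two of the Spark tuning levers mentioned above (shuffle-partition sizing and broadcast joins) are sketched below; the table paths and the partition count are illustrative assumptions, not values from the role:

```python
# Hedged sketch of two common Spark tuning levers: right-sizing shuffle
# partitions and broadcasting a small dimension table. Paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join_tuning_example").getOrCreate()

# Match shuffle parallelism to the data volume and cluster size instead of the
# default; too many tiny partitions wastes scheduler overhead, too few causes spill.
spark.conf.set("spark.sql.shuffle.partitions", "200")

facts = spark.read.parquet("s3://lake/curated/fact_transactions/")
dim_store = spark.read.parquet("s3://lake/curated/dim_store/")

# Broadcasting the small dimension avoids shuffling the large fact table.
enriched = facts.join(broadcast(dim_store), on="store_id", how="left")

enriched.groupBy("store_region").count().show()

spark.stop()
```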
Posted 1 day ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
P2-C3-STS
Experience with AWS CDK development using TypeScript or Python.
Experience with AWS services such as S3, Redshift, RDS, EC2, Glue, CloudFormation, etc.
Experience with Lambda development using Python.
Experience with access management using IAM roles.
Experience with code management using GitHub.
Experience with data pipeline orchestration using Apache Airflow.
Experience with monitoring tools like CloudWatch.
Fluent in programming languages such as SQL, Python and TypeScript.
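As a rough illustration of the CDK, Lambda and IAM skills listed above, the sketch below defines a small CDK (v2) stack in Python with one S3 bucket and one Python Lambda that is granted read access to it. The stack name, bucket ID, handler path and asset directory are hypothetical.

```python
# Minimal, illustrative AWS CDK (v2) stack: one S3 bucket plus a Python Lambda
# that can read from it. All names and the asset path are placeholders.
from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda
from constructs import Construct


class IngestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        raw_bucket = s3.Bucket(self, "RawDataBucket")  # hypothetical bucket

        ingest_fn = _lambda.Function(
            self,
            "IngestFunction",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="handler.main",                       # handler.py with a main() entrypoint
            code=_lambda.Code.from_asset("lambda_src"),   # hypothetical asset directory
        )

        # IAM wiring: grants the Lambda's execution role read access to the bucket.
        raw_bucket.grant_read(ingest_fn)


app = App()
IngestStack(app, "IngestStack")
app.synth()
```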
Posted 1 day ago
3.0 - 8.0 years
10 - 20 Lacs
Pune
Remote
Role: Retool Developer (Data Engineer)
Location: Remote (Anywhere in India only)
Shift: US - CST Time
Department: Data Engineering
Job Summary:
We are looking for a highly skilled and experienced Retool expert to join our team. In this role, you will be responsible for designing, developing, and maintaining internal tools and dashboards using the Retool platform. You will work closely with various teams to understand their needs and build effective solutions that improve operational efficiency and data visibility.
Job Responsibilities:
1. Design and build custom internal tools, dashboards, and applications using Retool to meet specific business requirements.
2. Connect Retool applications to various data sources, including SQL databases, real-time queues, data lakes, and APIs.
3. Write and optimize SQL queries to retrieve, manipulate, and present data effectively within Retool.
4. Utilize basic JavaScript to enhance Retool application functionality, create custom logic, and interact with data.
5. Develop interactive data visualizations and reports within Retool to provide clear insights and support data-driven decision-making.
6. Collaborate with business stakeholders and other technical teams to gather requirements, provide technical guidance, and ensure solutions align with business goals.
7. Troubleshoot, debug, and optimize Retool applications for performance and scalability.
8. Maintain clear documentation of Retool applications, including design, data connections, and logic.
9. Stay up to date with the latest Retool features and best practices to continually improve our internal tools.
Qualifications:
1. Strong proficiency in SQL for data querying, manipulation, and database management.
2. Solid understanding of basic JavaScript for scripting, custom logic, and enhancing user experience within Retool.
3. Demonstrated expertise in data visualization, including the ability to create clear, insightful, and user-friendly charts and graphs.
4. Ability to translate business needs into technical solutions using Retool.
5. Excellent problem-solving skills and attention to detail.
6. Strong communication and collaboration skills to work effectively with technical and non-technical teams.
Preferred Qualifications (Bonus Points):
1. Experience with other low-code/no-code platforms.
2. Familiarity with UI/UX principles for building intuitive interfaces.
Why Join Atidiv?
100% Remote | Flexible Work Culture
Opportunity to work with cutting-edge technologies
Collaborative, supportive team that values innovation and ownership
Work on high-impact, global projects
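Retool queries themselves live inside the platform's SQL and JavaScript editors (typically as resource queries with {{ }} bindings), but as a rough stand-in for the parameterized, optimized SQL this role centers on, here is a plain-Python sketch using psycopg2. The connection string, table and columns are hypothetical.

```python
# Stand-in for a parameterized dashboard query (in Retool this would be a SQL
# resource query with {{ }} bindings). Connection details and names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=readonly host=localhost")

query = """
    SELECT status, COUNT(*) AS ticket_count
    FROM support_tickets
    WHERE created_at >= %s
    GROUP BY status
    ORDER BY ticket_count DESC
"""

with conn, conn.cursor() as cur:
    # Parameter binding keeps the query safe from injection and easy to reuse.
    cur.execute(query, ("2024-01-01",))
    for status, ticket_count in cur.fetchall():
        print(status, ticket_count)

conn.close()
```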
Posted 1 day ago
0.0 - 1.0 years
5 - 9 Lacs
Kolkata
Work from Office
Key Responsibilities
Collaborate with data scientists to support end-to-end ML model development, including data preparation, feature engineering, training, and evaluation.
Build and maintain automated pipelines for data ingestion, transformation, and model scoring using Python and SQL.
Assist in model deployment using CI/CD pipelines (e.g., Jenkins) and ensure smooth integration with production systems.
Develop tools and scripts to support model monitoring, logging, and retraining workflows.
Work with data from relational databases (RDS, Redshift) and preprocess it for model consumption.
Analyze pipeline performance and model behavior; identify opportunities for optimization and refactoring.
Contribute to the development of a feature store and standardized processes to support reproducible data science.
Required Skills & Experience
1-3 years of hands-on experience in Python programming for data science or ML engineering tasks.
Solid understanding of machine learning workflows, including model training, validation, deployment, and monitoring.
Proficient in SQL and working with structured data from sources like Redshift, RDS, etc.
Familiarity with ETL pipelines and data transformation best practices.
Basic understanding of ML model deployment strategies and CI/CD tools like Jenkins.
Strong analytical mindset with the ability to interpret and debug data/model issues.
Preferred Qualifications
Exposure to frameworks like scikit-learn, XGBoost, LightGBM, or similar.
Knowledge of ML lifecycle tools (e.g., MLflow, Ray).
Familiarity with cloud platforms (AWS preferred) and scalable infrastructure.
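To ground the training-validation-deployment workflow described above, here is a minimal scikit-learn sketch of a train, evaluate and persist loop. The synthetic dataset and the model file name are placeholders standing in for features that would really come from Redshift or RDS.

```python
# Minimal train/validate/persist loop of the kind the role supports.
# The synthetic data and the artifact file name are placeholders.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for features prepared upstream (e.g. pulled from Redshift/RDS and preprocessed).
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Validation metric that a monitoring/retraining job might track over time.
auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"validation ROC AUC: {auc:.3f}")

# Persist the artifact so a CI/CD step (e.g. Jenkins) can pick it up for deployment.
joblib.dump(model, "model.joblib")
```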
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
Consult with clients to understand their business goals and data challenges
Design and implement data collection, cleansing, and transformation processes
Analyze structured and unstructured data using tools like SQL, Python, or R
Create dashboards, reports, and visualizations using tools like Power BI, Tableau, or Looker
Provide strategic recommendations based on data analysis and business objectives
Work with cross-functional teams (e.g., IT, Marketing, Finance) to align data strategies
Ensure data quality, integrity, and compliance with relevant data governance standards
Document findings, processes, and methodologies for stakeholders
Requirements:
Bachelor's degree in Data Science, Computer Science, Statistics, Business Analytics, or related field
5-8 years of experience in data analytics, business intelligence, or data consulting
Proficiency in SQL and at least one programming language (e.g., Python, R)
Experience with data visualization tools (e.g., Tableau, Power BI, Qlik)
Strong problem-solving skills and business acumen
Excellent communication skills to present complex data insights to non-technical audiences
Knowledge of cloud platforms (AWS, Azure, or GCP) is a plus
Preferred Qualifications:
Experience with data warehousing (e.g., Snowflake, Redshift, BigQuery)
Knowledge of machine learning concepts or tools
Experience working in a client-facing consulting role
Certifications in data analytics or cloud platforms (e.g., Google Data Engineer, Microsoft Azure Data Scientist)
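For a concrete picture of the cleansing-and-transformation work listed above, here is a small Pandas sketch that prepares an aggregate a BI dashboard could consume. The CSV path and column names are hypothetical.

```python
# Illustrative cleansing-and-aggregation step of the kind that feeds a BI dashboard.
# The CSV path and column names are hypothetical.
import pandas as pd

orders = pd.read_csv("client_orders.csv", parse_dates=["order_date"])

# Basic data-quality pass: drop exact duplicates and rows missing key fields.
orders = orders.drop_duplicates().dropna(subset=["customer_id", "order_value"])

# Monthly revenue by region, ready to load into Power BI / Tableau / Looker.
monthly_revenue = (
    orders.assign(order_month=orders["order_date"].dt.to_period("M").astype(str))
          .groupby(["order_month", "region"], as_index=False)["order_value"]
          .sum()
          .rename(columns={"order_value": "revenue"})
)

monthly_revenue.to_csv("monthly_revenue_by_region.csv", index=False)
```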
Posted 1 day ago
6.0 years
0 Lacs
India
Remote
About BeGig
BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you're not just taking on one role—you're signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.
Your Opportunity
Join our network as a Data Analyst and collaborate with forward-thinking startups to transform raw data into actionable insights. You'll help drive data-informed decisions through deep analysis, visualization, and reporting—impacting product strategies, customer experience, and business growth. Enjoy the flexibility to work remotely, set your own hours, and take on projects aligned with your domain and interests.
Role Overview
As a Data Analyst, you will:
Analyze Data for Insights: Collect, clean, and analyze structured and unstructured data to identify patterns and trends.
Visualize & Communicate Results: Create dashboards and visual reports to support business decisions.
Collaborate Across Teams: Work with product, engineering, marketing, and leadership teams to define KPIs and improve processes using data.
What You'll Do
📊 Data Analysis & Reporting
Interpret complex datasets to extract meaningful business insights.
Conduct exploratory and statistical data analysis.
Build and maintain dashboards using tools like Power BI, Tableau, or Looker.
📂 Data Management & Quality
Collect, clean, and validate data from multiple sources.
Perform data audits to ensure accuracy and completeness.
Automate reports and build repeatable data processes.
🤝 Stakeholder Collaboration
Translate business questions into analytical solutions.
Present findings clearly and confidently to both technical and non-technical stakeholders.
Partner with data engineers and developers to improve data pipelines and accessibility.
Technical Requirements & Skills
Experience: 3–6 years in a Data Analyst or similar role.
Tools: Proficient in SQL, Excel, and at least one BI tool (Tableau, Power BI, Looker).
Languages: Experience with Python or R for data analysis (preferred).
Data Handling: Strong understanding of databases, joins, indexing, and data cleaning.
Bonus: Familiarity with data warehousing (e.g., Snowflake, Redshift), A/B testing, or Google Analytics.
What We're Looking For
A sharp analytical thinker with a passion for making data useful.
A self-starter comfortable working independently and managing multiple projects.
A team player who communicates clearly and thrives in collaborative environments.
A freelancer excited by innovation and fast-paced problem solving.
Why Join Us?
🚀 Immediate Impact: Work on high-visibility projects driving business decisions.
🌍 Remote & Flexible: Set your own schedule and work style.
🔁 Ongoing Projects: Get matched to future gigs that align with your skills and goals.
💡 Innovative Work: Collaborate with startups building the future of data-driven products.
Ready to turn data into impact? Apply now to join BeGig as a Data Analyst and be part of a curated network driving innovation across industries.
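Since the posting above lists A/B testing as a bonus skill, here is a small sketch of a two-variant conversion comparison using a chi-square test of independence. The conversion counts are made-up example numbers, not real data.

```python
# Illustrative A/B conversion check with a chi-square test of independence.
# The conversion counts below are made-up example numbers.
from scipy.stats import chi2_contingency

# Rows: variant A, variant B; columns: converted, did not convert.
observed = [
    [420, 9580],   # variant A: 420 conversions out of 10,000 users
    [505, 9495],   # variant B: 505 conversions out of 10,000 users
]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in conversion rates is statistically significant at the 5% level.")
else:
    print("No statistically significant difference detected at the 5% level.")
```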
Posted 1 day ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
🔍 Hiring: Big Data Engineer (AWS + SQL Expertise)
📍 Locations: Chennai (Primary), Gurugram, Pune
💼 Experience Level: A – 6 years | AC – 8 years
✅ Key Skills Required (Must Have):
Cloud: AWS Big Data Stack – S3, Glue, Athena, EMR
Programming: Python, Spark, SQL, MuleSoft, Talend, dbt
Data Warehousing & ETL: Redshift / Snowflake, ETL/ELT pipeline development
Data Handling: Structured & semi-structured data transformation
Process & Optimization: Data ingestion, transformation, performance tuning
✅ Preferred Skills (Good to Have):
AWS Data Engineer Certification
Experience with Spark, Hive, Kafka, Kinesis, Airflow
Familiarity with ServiceNow, Jira (ITSM tools)
Data modeling experience
🎯 Key Responsibilities:
Build scalable data pipelines (ETL/ELT) for diverse data sources
Clean, validate, and transform data for business consumption
Develop data model objects (views, tables) for downstream usage
Optimize storage and query performance in AWS environments
Collaborate with business & technical stakeholders
Support code reviews, documentation, and incident resolution
🎓 Qualifications:
Bachelor's in Computer Science, IT, or related field (or equivalent experience)
Strong problem-solving, communication, and documentation skills
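As a rough illustration of the S3/Athena side of this stack, here is a minimal boto3 sketch that submits an Athena query over data-lake tables and polls for the result. The region, database, table and S3 output location are hypothetical.

```python
# Minimal boto3 sketch: run an Athena query over data-lake tables and wait for it.
# Region, database, table and S3 output location are hypothetical placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS cnt FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes (a production job would add timeouts and retries).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    # The first row holds the column headers; the rest hold the aggregated counts.
    for row in rows[1:]:
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    print(f"Query ended in state: {state}")
```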
Posted 1 day ago