0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities
• Architecture and design for data engineering and machine learning projects: establish the architecture and target design for data engineering and ML projects.
• Requirement analysis, planning, effort and resource estimation: analyze the current inventory, review and formalize requirements, and build the project plan and execution plan.
• Advisory services and best practices: troubleshooting, performance tuning, cost optimization, operational runbooks, and mentoring.
• Large migrations: assist customers with large migrations to Databricks from Hadoop ecosystems, data warehouses (Teradata, DataStage, Netezza, Ab Initio), ETL engines (Informatica), SAS, SQL/DW workloads, and cloud data platforms such as Redshift, Snowflake, and EMR.
• Design, build and optimize data pipelines: deliver a best-in-class Databricks implementation with flexibility for future iterations.
• Production readiness: assist customers with production readiness, including exception handling, production cutover, capture analysis, alert scheduling, and monitoring.
• Machine learning (ML) – model review, tuning, ML operations and optimization: build and review ML models, apply ML best practices, and manage the model lifecycle, ML frameworks, and deployment of models in production.

Must Have:
▪ Pre-sales experience is a must.
▪ Hands-on experience with distributed computing frameworks such as Databricks and the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL).
▪ Willingness to work with product teams to best optimize product features/functions.
▪ Experience with batch workloads and real-time streaming at high data volumes.
▪ Performance optimization of Spark workloads.
▪ Environment setup, user management, authentication, and cluster management on Databricks.
▪ Professional curiosity and the ability to ramp up on new technologies and tasks.
▪ Good understanding of SQL and a solid grasp of relational and analytical database management theory and practice.

Key Skills:
• Python, SQL and PySpark
• Big Data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase)
• Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks
• Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse)
• AWS (Lambda, AWS Glue, S3, Redshift)
• Data modelling, ETL methodology
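For illustration only (not part of the original posting), here is a minimal PySpark sketch of the kind of batch pipeline this role describes; the bucket paths, table layout, and column names are hypothetical assumptions.

```python
# Minimal PySpark batch-pipeline sketch; paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_aggregation").getOrCreate()

# Read raw events from object storage (path is an assumption).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Basic cleansing and a daily aggregate, typical of a curated-layer job.
daily = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Write partitioned output for downstream analytics.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_orders/"
)
```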
Posted 6 days ago
0 years
0 Lacs
Khairatabad, Telangana, India
On-site
Location: IN - Hyderabad, Telangana
Goodyear Talent Acquisition Representative: Maria Monica Canding
Sponsorship Available: No
Relocation Assistance Available: No

Job Responsibilities
• You are responsible for designing and building data products, legal data layers, data streams, algorithms, and reporting systems (e.g., dashboards, front ends).
• You ensure the correct design of solutions, performance, and scalability while considering appropriate cost control.
• You link data product design with DevOps and infrastructure.
• You act as a reference within and outside the Analytics team.
• You serve as a technical partner to Data Engineers regarding digital product implementation.

Qualifications
• You have a Bachelor’s degree in Computer Science, Engineering, Management Information Systems, or a related discipline, or you have 10 or more years of experience in Information Technology in lieu of a degree.
• You have 5 or more years of experience in Information Technology.
• You have an in-depth understanding of database structure principles.
• You have experience gathering and analyzing system requirements.
• You have knowledge of data mining and segmentation techniques.
• You have expertise in SQL and Oracle.
• You are familiar with data visualization tools (e.g., Tableau, Cognos, SAP Analytics Cloud).
• You possess proven analytical skills and a problem-solving attitude.
• You have a proven ability to work with distributed systems.
• You are able to develop creative solutions to problems.
• You have knowledge and strong skills with SQL and NoSQL databases and applications, such as Teradata, Redshift, MongoDB, or equivalent.

Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law.

Goodyear is one of the world’s largest tire companies. It employs about 74,000 people and manufactures its products in 57 facilities in 23 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
Posted 6 days ago
12.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Primary Skills
• AWS Glue, Glue Catalogue
• AWS Redshift / Redshift Spectrum
• Managed Airflow / Step Functions
• AWS S3 Tables / Iceberg / Athena
• PySpark

Good to have
• AWS AppFlow
• AWS Lambda / Fargate / EKS
• CDK / CloudFormation / Terraform
• AWS DataZone or DataZone Business Catalogue, else Great Expectations
• AWS DMS / DataSync / SCT tool / third-party ingestion tool
• AWS MSK / Kinesis / Kinesis Firehose

Qualifications
• Bachelor's and/or master’s degree in computer science or equivalent experience.
• Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
• Should have experience working in Agile methodology.
• Strong verbal and written communication skills.
• Strong analytical and problem-solving skills with high attention to detail.

About Us: Tredence is a global data science solutions provider founded in 2013 by Shub Bhowmick, Sumit Mehra, and Shashank Dubey, focused on solving the last-mile problem in AI. Headquartered in San Jose, California, the company embraces a vertical-first approach and an outcome-driven mindset to help clients win and accelerate value realization from their analytics investments. The aim is to bridge the gap between insight delivery and value realization by providing customers with a differentiated approach to data and analytics through tailor-made solutions. Tredence is 2,500-plus employees strong with offices in San Jose, Foster City, Chicago, London, Toronto, and Bangalore, with the largest companies in retail, CPG, hi-tech, telecom, healthcare, travel, and industrials as clients. We’re focused on bridging the gap between insights delivery and value realization, partnering with our clients to help them cross the last-mile gap in analytics. This DNA pushes our ecosystem to go beyond possible, helping our clients re-think their limitations and overcome their norms of ‘possible’. As we complete 12 years of Tredence this year, we are on the cusp of an ambitious and exciting phase of expansion and growth. Tredence recently closed a USD 175 million Series B funding round, which will help us build on growth momentum, strengthen vertical capabilities, and reach a broader customer base.
• 3,500 data scientists - Every person at Tredence brings their whole self to work and helps us create a culture of curiosity and innovation
• 10+ global delivery centres - Enabling user-driven analytics for some of the world’s largest Fortune 50 companies
• 75+ global customers - Trust us to help them create sustainable impact by systematizing the last-mile adoption of analytics
• Unprecedented operational excellence - with AI and self-service to solve complex problems across the enterprise value chain
• One enterprise AI studio - offering a co-innovation platform and marketplace for AI and advanced analytics solutions to move innovation from experimentation to value realization
• Acquired Further Advisory to accelerate Banking & Financial Services transformation
• Awarded 2025 Google Cloud Industry Solutions Partner of the Year for CPG
At Tredence, nothing is impossible; we believe in pushing ourselves to limitless possibilities and staying true to our tagline, Beyond Possible.
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Cloud and AWS Expertise: In-depth knowledge of AWS services related to data engineering, including EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena. Strong understanding of cloud architecture and best practices for high availability and fault tolerance.
Data Engineering Concepts: Expertise in ETL/ELT processes, data modeling, and data warehousing. Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark. Proficiency in handling structured and unstructured data.
Programming and Scripting: Proficiency in Python, PySpark and SQL for data manipulation and pipeline development. Expertise in working with data warehousing solutions like Redshift.
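As an illustration of the event-driven pattern these AWS services enable (not part of the posting), here is a hedged sketch of a Lambda handler that starts a Glue job when a new file lands in S3; the bucket, key, and Glue job name are hypothetical.

```python
# Illustrative event-driven ETL trigger; the Glue job name and S3 details are assumptions.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered by an S3 put event; starts a Glue ETL run for each new object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Pass the new object's location to the Glue job as job arguments.
        response = glue.start_job_run(
            JobName="example-curation-job",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print(f"Started Glue run {response['JobRunId']} for s3://{bucket}/{key}")
    return {"status": "ok"}
```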
Posted 6 days ago
0.0 years
0 Lacs
Kolkata, West Bengal
On-site
Kolkata, West Bengal, India | Job ID 768921

Join our Team

About this opportunity: We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.

What you will do:
• Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads.
• Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring.
• Design and implement cloud-based data lake and data warehouse architectures.
• Ensure high data quality, governance, security, and observability across data platforms.
• Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka.
• Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.

The skills you bring:
• Languages: Python, SQL, Bash
• Data Tools: Apache Spark, Airflow, Kafka, dbt, Pandas
• Cloud Platforms: AWS (preferred), Azure, or GCP
• Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
• DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)
• Data Modeling: OLAP/OLTP, Star/Snowflake schema, Data Vault

Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
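For illustration of the orchestration work mentioned above (not part of the posting), a minimal Airflow DAG sketch follows; the DAG id, schedule, and the extract/transform/load callables are hypothetical placeholders.

```python
# Hedged sketch of a simple batch ETL DAG; task logic is a placeholder.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw records from the source system")

def transform():
    print("clean and reshape the extracted records")

def load():
    print("write the curated records to the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```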
Posted 6 days ago
4.0 years
0 Lacs
Pune, Maharashtra
Remote
Job Description

Sr. Data Platform Engineer

Our Enterprise Data & Analytics (EDA) team is looking for an experienced Senior Data Platform Engineer to join our growing Platform Engineering team. You’ll work in a collaborative Agile environment using the latest engineering best practices, with involvement in all aspects of the software development lifecycle. As a Senior Data Platform Engineer, you will be responsible for building and maintaining many key parts of the Zendesk Data Platform, including next-generation reporting and analytics, working closely with your team members to craft, develop and deliver reporting products for our customers and high-quality software projects on time. Data is at the heart of Zendesk’s business! This is an autonomous role that can have a huge impact across all of the Zendesk product family!

What you get to do every single day:
• Design, develop and maintain scalable and efficient data infrastructure components, including data pipelines, storage solutions and data processing frameworks
• Build and manage integrations with various internal and external data sources via ETL solutions
• Design, implement, and maintain CI/CD pipelines using DevOps tools like Terraform and GitHub Actions for automated build, test, and deployment processes
• Build high-quality, clean, scalable and reusable code by enforcing best practices around software engineering architecture and processes (code reviews, unit testing, etc.)
• Collaborate with team members on researching and brainstorming different solutions for technical challenges we face
• Continually improve data pipelines for high efficiency, throughput and quality of data
• Investigate production issues and fine-tune our data pipelines
• Build and promote best engineering practices in areas of version control, CI/CD, code review, pair programming
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery
• Stay up-to-date with the latest technologies and industry trends, and proactively recommend improvements to our data platform

What you bring to the role:

Basic Qualifications
• 4+ years of data engineering experience building, working with and maintaining scalable data infrastructure (data pipelines and ETL processes on big data environments)
• 2+ years of experience with cloud columnar databases (Snowflake)
• Proven experience as a CI/CD Engineer or DevOps Engineer, with a focus on data platforms and analytics (Terraform, Docker, GitHub Actions)
• Experience with a cloud platform (AWS, Google Cloud)
• Proficiency in query authoring (SQL) and data processing (batch and streaming)
• Intermediate experience with any of the following programming languages: Python, Go, Java, Scala (we use primarily Python)
• Experience with ETL schedulers such as Apache Airflow, AWS Glue or similar frameworks
• Developer skills, demonstrating a strong passion to design scalable and fault-tolerant software systems
• Integration with third-party API SaaS applications like Salesforce, Zuora, etc.
• Ensure data integrity and accuracy by conducting regular data audits, identifying and resolving data quality issues, and implementing data governance best practices
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
• Ability to work closely with data scientists, analysts, and other stakeholders to translate business requirements into technical solutions
• Strong documentation skills for pipeline design and data flow diagrams
Preferred Qualifications
• Extensive experience with Snowflake or similar cloud warehouses (BigQuery, Redshift)
• Familiarity with infrastructure-as-code principles and tools (e.g., Terraform, CloudFormation, GitHub Actions)
• Experience with version control systems (e.g., Git) and CI/CD best practices for software development to bring automation
• Expert knowledge in Python
• Familiarity with Airflow, Fivetran, Hightouch

What does our data stack look like:
• ELT (Fivetran, BigQuery, Snowflake, dbt, Airflow)
• BI (Tableau, Looker)
• Infrastructure (GCP, AWS, Kubernetes, Terraform, GitHub Actions)

Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based.

Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration, while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager.

The intelligent heart of customer experience: Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn whilst also giving our people the flexibility to work remotely for part of the week. Zendesk is an equal opportunity employer, and we’re proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.
Posted 6 days ago
1.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Engineering | Experience level: Sr. Associate | Primary Address: Bangalore, Karnataka

Overview
Voyager (94001), India, Bangalore, Karnataka

Senior Associate - Data Engineer

Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One India, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you’ll have the opportunity to be on the forefront of driving a major transformation within Capital One.

What You’ll Do:
• Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies
• Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems
• Utilize programming languages like Java, Scala, Python and open source RDBMS and NoSQL databases and cloud-based data warehousing services such as Redshift and Snowflake
• Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal and external technology communities, and mentoring other members of the engineering community
• Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment
• Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance

Basic Qualifications:
• Bachelor’s Degree
• At least 1.5 years of experience in application development (internship experience does not apply)
• At least 1 year of experience in big data technologies

Preferred Qualifications:
• 3+ years of experience in application development including Python, SQL, Scala, or Java
• 1+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
• 2+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
• 1+ years of experience working on real-time data and streaming applications
• 1+ years of experience with NoSQL implementation (Mongo, Cassandra)
• 1+ years of data warehousing experience (Redshift or Snowflake)
• 2+ years of experience with UNIX/Linux including basic commands and shell scripting
• 1+ years of experience with Agile engineering practices

***At this time, Capital One will not sponsor a new applicant for employment authorization for this position.

No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).

How We Hire
We take finding great coworkers pretty seriously.
• Step 1 – Apply: It only takes a few minutes to complete our application and assessment.
• Step 2 – Screen and Schedule: If your application is a good match you’ll hear from one of our recruiters to set up a screening interview.
• Step 3 – Interview(s): Now’s your chance to learn about the job, show us who you are, share why you would be a great addition to the team and determine if Capital One is the place for you.
• Step 4 – Decision: The team will discuss, and if it’s a good fit for us and you, we’ll make it official!

How to Pick the Perfect Career Opportunity
Overwhelmed by a tough career choice? Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence.

Your wellbeing is our priority
Our benefits and total compensation package is designed for the whole person, caring for both you and your family.
• Healthy Body, Healthy Mind: You have options and we have the tools to help you decide which health plans best fit your needs.
• Save Money, Make Money: Secure your present, plan for your future and reduce expenses along the way.
• Time, Family and Advice: Options for your time, opportunities for your family, and advice along the way. It’s time to BeWell.

Career Journey
Here’s how the team fits together. We’re big on growth and knowing who and how coworkers can best support you.
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a detail-oriented and technically strong ETL Quality Engineer to join our data engineering and QA team. The ideal candidate will be responsible for ensuring the accuracy, integrity, and reliability of data pipelines and ETL processes. You will work closely with data engineers, business analysts, and developers to validate and verify end-to-end data flows and transformations.

Key Responsibilities
• Review and analyze data requirements, source-to-target mappings, and business rules.
• Design, develop, and execute comprehensive ETL test plans, test cases, and test scripts.
• Perform data validation, transformation testing, and reconciliation across source and target systems.
• Identify and document defects, inconsistencies, and data quality issues.
• Validate performance of ETL jobs and data pipelines under various workloads.
• Participate in code reviews, defect triage meetings, and QA strategy planning.
• Use SQL to query, validate, and compare large datasets across environments.
• Maintain and enhance test automation frameworks for data pipeline validation.

Required Technical Skills
• Strong experience with ETL testing tools such as Informatica, Talend, SSIS, DataStage, or equivalent.
• Proficiency in SQL for complex queries, joins, aggregations, and data validation.
• Experience working with data warehouses, data lakes, or cloud-based data platforms (e.g., Snowflake, Redshift, BigQuery, Azure Synapse).
• Hands-on experience with test automation tools and frameworks related to data testing (e.g., Python, PyTest, DBT, Great Expectations).
• Knowledge of data profiling, data cleansing, and data governance practices.
• Familiarity with version control systems (e.g., Git) and CI/CD pipelines (e.g., Jenkins, Azure DevOps).
• Exposure to API testing for data integrations and ingestion pipelines (Postman, SoapUI, REST/SOAP APIs).

Candidate Profile
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 4-8 years of experience in data quality engineering or ETL QA roles.
• Excellent analytical and problem-solving skills.
• Strong communication and documentation abilities.
• Experience working in Agile/Scrum teams.

Preferred Qualifications
• Experience with cloud platforms like AWS, Azure, or GCP.
• Familiarity with Big Data ecosystems (e.g., Hadoop, Spark, Hive).
• DataOps or DevOps experience is a plus.
• Certification in data or QA-related domains (ISTQB, Microsoft, AWS Data Analytics, etc.)

Why Join Us?
• Work with modern data platforms and contribute to enterprise data quality initiatives.
• Be a key player in ensuring trust and confidence in business-critical data.
• Collaborate with cross-functional data, engineering, and analytics teams.
• Enjoy a culture that promotes growth, learning, and innovation. (ref:hirist.tech)
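As an illustration of the automated reconciliation testing this role describes (not part of the posting), here is a hedged PyTest sketch; it uses in-memory SQLite databases as stand-ins for the real source and target systems, and the table name and checks are hypothetical.

```python
# Hedged ETL reconciliation example; real tests would connect to the actual
# source and target warehouses instead of SQLite stand-ins.
import sqlite3
import pytest

@pytest.fixture
def connections():
    source = sqlite3.connect(":memory:")
    target = sqlite3.connect(":memory:")
    for conn in (source, target):
        conn.executescript(
            "CREATE TABLE orders (id INTEGER, amount REAL);"
            "INSERT INTO orders VALUES (1, 10.0), (2, 20.5);"
        )
    yield source, target
    source.close()
    target.close()

def test_row_counts_match(connections):
    source, target = connections
    src_count = source.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    tgt_count = target.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert src_count == tgt_count, "row counts diverge between source and target"

def test_amount_totals_match(connections):
    source, target = connections
    src_total = source.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    tgt_total = target.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    assert src_total == pytest.approx(tgt_total), "amount totals diverge"
```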
Posted 1 week ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Attention All Data Folks!!!

We are hiring a Sr/Lead Data Engineer in Indore, MP (Hybrid Model). Below is the JD for reference.

Sr/Lead Data Engineer | Indore, MP (Hybrid) | Full Time

Key Responsibilities:
• Gather and assemble large, complex sets of data that meet non-functional and functional business requirements. Skills: SQL, Python, R, Data Modeling, Data Warehousing, AWS (S3, Athena)
• Create new data pipelines or enhance existing pipelines to accommodate non-standard data formats from customers. Skills: ETL tools (e.g., Apache NiFi, Talend), Python (Pandas, PySpark), AWS Glue, JSON, XML, YAML
• Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Skills: Apache Airflow, Terraform, Kubernetes, AWS Lambda, CI/CD pipelines, Docker
• Build and maintain required infrastructure for optimal extraction, transformation, and loading (ETL) of data from various data sources using AWS and SQL technologies. Skills: SQL, AWS Redshift, AWS RDS, EMR (Elastic MapReduce), Snowflake
• Use existing methods or develop new tools/methods to analyze the data and perform required data sanity validations to ensure completeness and accuracy as per technical and functional requirements (see the validation sketch after this posting). Skills: Python (NumPy, Pandas), data validation tools, Tableau, Power BI
• Work with stakeholders including Customer Onboarding, Delivery, Product, and other functional teams, assisting them with any data-related technical or infrastructure-related issues. Skills: Stakeholder communication, JIRA, Agile methodologies
• Provide actionable insights into key data metrics (volumes, trends, outliers, etc.), highlight any challenges/improvements, and provide recommendations and solutions to relevant stakeholders. Skills: Data analysis, data visualization tools (Tableau, Looker), advanced Excel
• Coordinate with the Technical Program Manager (TPM) to prioritize discovered issues in the Data Sanity Report and own utility communications. Skills: Project management tools, reporting tools, clear documentation practices

About Ccube
Ccube: Pioneering Data-Driven Solutions in the Cloud. Ccube is a specialized firm that delivers measurable results across a wide range of industries by focusing exclusively on Data and Artificial Intelligence within Cloud environments. We leverage cutting-edge technologies and innovative strategies to help our clients harness the power of data and achieve their business objectives.

Core Competencies:
• Strategic planning and design of data systems: We collaborate with our clients to develop comprehensive data strategies and design robust data systems that align with their business goals. Our team of experts provides guidance on data architecture, data governance, and data management best practices.
• Development and unification of data frameworks: We build and integrate data frameworks that enable seamless data flow and analysis. Our solutions facilitate data ingestion, data transformation, and data storage, ensuring data is readily available for business intelligence and decision-making.
• Advanced data analysis and artificial intelligence applications: We employ sophisticated data analysis techniques and artificial intelligence algorithms to extract valuable insights from data. Our solutions include predictive modeling, machine learning, and natural language processing, enabling our clients to make data-driven decisions and optimize their operations.
• Cloud computing, data operations, and machine learning operations: We leverage the scalability and flexibility of cloud computing to deliver efficient and cost-effective data solutions. Our team of experts manages data operations and machine learning operations, ensuring seamless integration and optimal performance.

Organizational Principles at Ccube: At Ccube, we are guided by a set of core principles that shape our culture and drive our success:
• Efficiency: We strive to maximize efficiency by optimizing resource utilization and streamlining processes.
• Client Satisfaction: We are committed to providing exceptional service and exceeding our clients' expectations.
• Innovation: We embrace innovation and continuously explore new technologies and approaches to deliver cutting-edge solutions.
• Humility: We maintain professional modesty and recognize that there is always room for improvement.

Employee Advantages: Ccube offers a stimulating and rewarding work environment with numerous benefits for our employees:
• Dynamic Startup Environment: We provide a fast-paced and entrepreneurial environment where employees can learn, grow, and make a significant impact.
• Career Growth Opportunities: We offer ample opportunities for career advancement and professional development.
• Performance-Based Incentives: We reward high-performing employees with competitive compensation and performance-based bonuses.
• Equity Participation: We offer equity participation options to eligible employees, providing them with ownership and a stake in the company's success.
• Professional Development Reimbursement: We encourage continuous learning and reimburse employees for eligible professional development expenses.

Join Ccube and be part of a team that is shaping the future of data and AI in the cloud.
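For illustration of the data sanity validation responsibility referenced above (not part of the posting), here is a hedged pandas sketch; the file name, expected columns, and checks are hypothetical examples.

```python
# Hedged data-sanity validation sketch; column names and thresholds are assumptions.
import pandas as pd

EXPECTED_COLUMNS = {"customer_id", "event_date", "amount"}

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data quality issues found in the frame."""
    issues = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing expected columns: {sorted(missing)}")
        return issues
    if df["customer_id"].isna().any():
        issues.append("null customer_id values found")
    if df.duplicated(subset=["customer_id", "event_date"]).any():
        issues.append("duplicate (customer_id, event_date) rows found")
    if (df["amount"] < 0).any():
        issues.append("negative amounts found")
    return issues

if __name__ == "__main__":
    frame = pd.read_csv("customer_events.csv", parse_dates=["event_date"])
    for issue in validate(frame) or ["no issues found"]:
        print(issue)
```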
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Zeta
Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform - Zeta Tachyon - is the industry’s first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform globally. Zeta is actively working with the largest Banks and Fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700+ employees across the US, EMEA, and Asia, with 70%+ roles in R&D. Backed by SoftBank, Mastercard, and other investors, we raised $330M at a $2B valuation in 2025. Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter.

About The Role
In this role, you’ll design robust data models using SQL, DBT and Redshift, while driving best practices across development, deployment, and monitoring. You'll also collaborate closely with product and engineering to ensure data quality and impactful delivery.

Responsibilities
• Create optimised data models with SQL, DBT and Redshift
• Write functional and column-level tests for models
• Build reports from the data models
• Collaborate with product to clarify requirements and create design documents
• Get designs reviewed by an Architect/Principal/Lead Engineer
• Contribute to code reviews
• Set up and monitor Airflow DAGs
• Set up and use CI/CD pipelines
• Leverage Kubernetes operators for deployment automation
• Ensure data quality
• Drive best practices in data model development, deployment, and monitoring
• Mentor colleagues and contribute to team growth

Skills
• Bachelor’s/Master’s degree in engineering
• In-depth expertise in SQL and Python programming
• Strong expertise in SQL for complex data querying and optimization
• Hands-on experience with Apache Airflow for orchestration and scheduling
• Solid understanding of data modeling and data warehousing concepts
• Experience with dbt (Data Build Tool) for data transformation and modeling
• Exposure to Amazon Redshift or other cloud data warehouses
• Familiarity with CI/CD tools such as Jenkins
• Experience using Bitbucket for version control
• Monitoring and alerting using Grafana and Prometheus
• Working knowledge of JIRA for agile project tracking
• Familiarity with Kubernetes for deployment automation and orchestration

Experience And Qualifications
• 10-15 years of relevant experience in data engineering
• Bachelor’s/Master’s degree in engineering (computer science, information systems)

Equal Opportunity
Zeta is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We encourage applicants from all backgrounds, cultures, and communities to apply and believe that a diverse workforce is key to our success.
Posted 1 week ago
3.0 - 8.0 years
20 - 35 Lacs
Bengaluru
Hybrid
Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity?
Must-have skills: Golang, Python, Java

Requirements:
We are looking for a Backend Engineer to help us through the next level of technology changes needed to revolutionize healthcare for India. We are seeking individuals who can understand real-world scenarios and come up with scalable tech solutions for millions of patients to make healthcare accessible. The role comes with a good set of challenges to solve, and offers an opportunity to build new systems that will be rolled out at scale.
• You have 4 to 7 years or more of software development experience with expertise in designing and implementing high-performance web applications.
• Very strong understanding and experience with any of Java, Scala, GoLang, Python.
• Experience writing optimized queries in relational databases like MySQL, Redshift/Postgres.
• You have exposure to basic data engineering concepts like data pipelines, Hadoop or Spark.
• Write clean and testable code.
• You love to build platforms that enable other teams to build on top of.

Some of the challenges we solve include:
• Clinical decision support – Early Detection: digitally assist doctors in identifying high-risk patients for early intervention. Track & Advise: analyze patients' vitals/test values across visits to assist doctors in personalizing chronic care. Risk Prevention: assist doctors in monitoring the progression of chronic disease by drawing attention to additional symptoms and side effects.
• EMR (Electronic Medical Records): clinical software to write prescriptions and manage clinical records.
• AI-powered features – Adapts to doctors' practice: learns from doctors' prescribing preferences and provides relevant auto-fill recommendations for faster prescriptions. Longitudinal patient journey: AI analyses the longitudinal journey of patients to assist doctors in early detection. Medical language processing: AI-driven automatic digitization of printed prescriptions and test reports.
• Core platform: pharma advertising platform to doctors at the moment of truth; real-world evidence to generate market insights for B2B consumption.
• Virtual Store: online pharmacy + diagnostic solutions helping patients with one-click ordering.

Technologies we use:
• Distributed tech: Kafka, Elasticsearch
• Databases: MongoDB, RDS
• Cloud platform: AWS
• Languages: Go-lang, Python, PHP
• UI tech: React, React Native
• Caching: Redis
• Big Data: AWS Athena, Redshift
• APM: New Relic

Responsibilities:
• Develop well-testable and reusable services with structured, granular and well-commented code.
• Contribute in the area of API building, data pipeline setup, and new tech initiatives needed for a core platform.
• Acclimate to new technologies and situations as per the company demands and requirements, with the vision of providing the best customer experience.
• Meet expected deliverables and quality standards with every release.
• Collaborate with teams to design, develop, test and refine deliverables that meet the objectives.
• Perform code reviews and implement improvement plans.

Additional Responsibilities:
• Pitch in during the phases of design and architectural solutions of business problems.
• Organize, lead and motivate the development team to meet expected timelines and quality standards across releases.
• Actively contribute to development process improvement plans.
• Assist peers through code reviews and juniors through mentoring.

Must-have Skills:
• Sound understanding of Computer Science fundamentals including data structures and space and time complexity.
• Excellent problem-solving skills.
• Solid understanding of any of the modern object-oriented programming languages (like Java, Ruby or Python) and/or functional languages (like Scala, GoLang).
• Understanding of MPP (massively parallel processing) and frameworks like Spark.
• Experience working with databases (RDBMS - MySQL, Redshift etc.; NoSQL - Couchbase / MongoDB / Cassandra etc.).
• Experience working with open source libraries and frameworks.
• Strong hold on versioning tools Git/Bitbucket.

Good to have Skills:
• Knowledge of microservices architecture.
• Experience working with Kafka.
• Experience or exposure to ORM frameworks (like ActiveRecord, SQLAlchemy etc.).
• Working knowledge of full-text search (like Elasticsearch, Solr etc.).
Posted 1 week ago
2.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Hybrid
About the Role
Love deep data? Love discussing solutions instead of problems? Then you could be our next Data Scientist. In a nutshell, your primary responsibility will be enhancing the productivity and utilization of the generated data. Other things you will do are:
• Work closely with the business stakeholders
• Transform scattered pieces of information into valuable data
• Share and present your valuable insights with peers

What You Will Do
• Develop models and run experiments to infer insights from hard data
• Improve our product usability and identify new growth opportunities
• Understand reseller preferences to provide them with the most relevant products
• Design discount programs to help our resellers sell more
• Help resellers better recognize end-customer preferences to improve their revenue
• Use data to identify bottlenecks that will help our suppliers meet their SLA requirements
• Model seasonal demand to predict key organizational metrics
• Mentor junior data scientists in the team

What You Will Need
• Bachelor's/Master's degree in computer science (or similar degrees)
• 2-4 years of experience as a Data Scientist in a fast-paced organization, preferably B2C
• Familiarity with Neural Networks, Machine Learning, etc.
• Familiarity with tools like SQL, R, Python, etc.
• Strong understanding of Statistics and Linear Algebra
• Strong understanding of hypothesis/model testing and ability to identify common model testing errors
• Experience designing and running A/B tests and drawing insights from them
• Proficiency in machine learning algorithms
• Excellent analytical skills to fetch data from reliable sources to generate accurate insights
• Experience in tech and product teams is a plus

Bonus points for:
• Experience in working on personalization or other ML problems
• Familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift, etc.
Posted 1 week ago
3.0 - 6.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (Work from office, 5 days a week)

Role: Data Engineer II (Python, SQL)
As a Data Engineer II, you will work on designing, building, and maintaining scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.

Key Responsibilities:
• Design, develop and maintain end-to-end data pipelines (ETL/ELT).
• Ingest, clean, transform, and curate data for analytics and ML usage.
• Work with orchestration tools like Airflow to schedule and manage workflows.
• Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect).
• Build data models and enable real-time and batch processing using Spark and AWS services.
• Collaborate with DevOps and architects for system scalability and performance.
• Optimize Redshift-based data solutions for performance and reliability.

Must-Have Skills & Experience:
• 3+ years in Data Engineering or Data Science with strong ETL and pipeline experience.
• Expertise in Python and SQL.
• Strong experience in Data Warehousing, Data Lakes, Data Modeling, and Ingestion.
• Working knowledge of Airflow or similar orchestration tools.
• Hands-on with data extraction techniques like CDC and batch-based extraction, using Debezium, Kafka Connect, AWS DMS.
• Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
• Knowledge of Spark or similar distributed systems.
• Experience with queuing/messaging systems like SQS, Kinesis, RabbitMQ.
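For illustration of the CDC/streaming ingestion mentioned above (not part of the posting), a hedged consumer sketch follows; the topic name, brokers, and sink are hypothetical placeholders.

```python
# Hedged streaming-ingestion sketch using kafka-python; real deployments would
# upsert into the warehouse or land files in S3 instead of printing.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "example.orders.cdc",                      # hypothetical CDC topic
    bootstrap_servers=["localhost:9092"],
    group_id="orders-ingestion",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

for message in consumer:
    change_event = message.value
    # Print the operation type and the new row image carried by the change event.
    print(change_event.get("op"), change_event.get("after"))
```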
Posted 1 week ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
Skills – Mandatory: SQL, AWS, Python
Skills – Primary: SQL, PostgreSQL, Python, Amazon Athena
Skills – Good to have: AWS, PostgreSQL, Amazon Athena
Qualification: Bachelor’s Degree
Total Experience: 7+ years
Relevant Experience: 7+ years
Work Location: Cochin/TVM
Expected date of onboarding: Immediate
Number of Positions: 1
Work shifts – General / US / UK: General

Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
• Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
• Analyze large datasets to uncover trends, patterns, and actionable insights.
• Apply statistical techniques to find trends, correlations, or patterns.
• Design and build dashboards and reports using Power BI.
• Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
• Ensure data accuracy, consistency, and integrity through data validation and quality checks.
• Build and maintain SQL queries, views, and data models for reporting purposes.
• Communicate findings clearly through presentations, visualizations, and written summaries.
• Partner with data engineers and architects to improve data pipelines and architecture.
• Contribute to the definition of KPIs, metrics, and data governance standards.
• Perform exploratory data analysis (EDA) and identify key insights that inform business decisions.

Job Specification / Skills and Competencies
• Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
• 7+ years of experience in a data analyst or business intelligence role.
• Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
• Hands-on experience in Power BI is preferred.
• Proficiency in Python, Excel and data storytelling.
• Basic predictive modelling or ML prototyping.
• Understanding of data modelling, ETL concepts, and basic data architecture.
• Strong analytical thinking and problem-solving skills.
• Excellent communication and stakeholder management skills.
• Adherence to the Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a pro-active problem solver and a leader by self
▪ Manage & nurture a team of data engineers
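For illustration of the Amazon Athena skill listed above (not part of the posting), here is a hedged boto3 sketch; the database name, S3 results location, and query are hypothetical.

```python
# Hedged Athena query sketch via boto3; the database, output bucket, and SQL are assumptions.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

def run_query(sql: str) -> list[dict]:
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "example_analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)
        status = state["QueryExecution"]["Status"]["State"]
        if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if status != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {status}")

    results = athena.get_query_results(QueryExecutionId=query_id)
    rows = results["ResultSet"]["Rows"]
    header = [col["VarCharValue"] for col in rows[0]["Data"]]
    return [
        dict(zip(header, [c.get("VarCharValue") for c in row["Data"]]))
        for row in rows[1:]
    ]

if __name__ == "__main__":
    for row in run_query("SELECT region, COUNT(*) AS orders FROM orders GROUP BY region"):
        print(row)
```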
Posted 1 week ago
5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
Job Description 🔹Position: Senior Data Analyst 📍Location: Trivandrum/Kochi / Remote 🕓 Experience: 5+ Years ⌛ Notice Period: Immediate Joiners Only 🛠 Mandatory Skills: SQL, Power BI, Python, Amazon Athena 🔎 Job Purpose We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization. 📍Job Description / Duties & Responsibilities Collaborate with business stakeholders to understand data needs and translate them into analytical requirements. Analyze large datasets to uncover trends, patterns, and actionable insights. Design and build dashboards and reports using Power BI. Perform ad-hoc analysis and develop data-driven narratives to support decision-making. Ensure data accuracy, consistency, and integrity through data validation and quality checks. Build and maintain SQL queries, views, and data models for reporting purposes. Communicate findings clearly through presentations, visualizations, and written summaries. Partner with data engineers and architects to improve data pipelines and architecture. Contribute to the definition of KPIs, metrics, and data governance standards. 📍Job Specification / Skills and Competencies Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Economics, or a related field. 5+ years of experience in a data analyst or business intelligence role. Advanced proficiency in SQL and experience working with relational databases (e.g. SQL Server, Redshift, Snowflake). Hands-on experience in Power BI. Proficiency in Python, Excel and data storytelling. Understanding of data modelling, ETL concepts, and basic data architecture. Strong analytical thinking and problem-solving skills. Excellent communication and stakeholder management skills To adhere to the Information Security Management policies and procedures. 📍Soft Skills Required Must be a good team player with good communication skills Must have good presentation skills Must be a pro-active problem solver and a leader by self Manage & nurture a team of data engineers
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Dear Candidate,

Greetings from TCS!!! TCS is hiring for AWS Data Engineer. Please find the JD below.

Experience range – 6 to 10 years
Location – Hyderabad, Kolkata
Skills Required – AWS Glue, Lambda, Athena

Role & Responsibilities –
• Good hands-on experience in Python programming and PySpark/Scala
• Data engineering experience using AWS core services (Lambda, Glue, EMR, S3 and Redshift)
• Good knowledge of any of the ETL tools and SQL
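For illustration of the Glue skill set above (not part of the posting), here is a hedged skeleton of a minimal AWS Glue PySpark job; the catalog database, table name, and output path are hypothetical.

```python
# Hedged AWS Glue (PySpark) job skeleton; catalog and S3 names are assumptions.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# A trivial transformation: keep only completed orders.
completed = source.toDF().filter("order_status = 'COMPLETED'")

# Write curated output back to S3 as Parquet.
completed.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```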
Posted 1 week ago
5.0 - 15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TCS Hiring for HPC Engineer (High-Performance Computing Engineer) – PAN India
Experience: 5 to 15 Years Only
Job Location: PAN India

Required Technical Skill Set: AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes
AWS Services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation

Desired Competencies (Technical/Behavioral Competency)
Primary Focus: Developing and running bioinformatic workflows/pipelines, leveraging and managing WDL engines like miniWDL, Slurm and R on the AWS cloud, utilizing technologies like AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes, etc.

Technical Skills:
• AWS services: EC2, RDS, S3, Redshift, FSx, EFS, IAM, Batch, CloudFormation, etc.
• Python and Bash scripting
• Experience with Slurm (configuring Slurm partitions, converting standalone jobs to Slurm, etc.)
• Experience with R Workbench and Package Manager – install and configure R Workbench with a scalable backend such as Slurm, Kubernetes, etc.
• Experience with Docker

Key Responsibilities: Developing and running bioinformatic workflows/pipelines, leveraging and managing WDL engines like miniWDL, Slurm and R on the AWS cloud, utilizing technologies like AWS ParallelCluster, R Workbench, Batch, ECS, Kubernetes, etc.

Good-to-Have:
1. Knowledge of Docker, ECS, RDS, S3
2. Good knowledge in developing and running bioinformatic workflows/pipelines, leveraging and managing WDL engines like miniWDL and Slurm

Kind Regards,
Priyankha M
Posted 1 week ago
7.0 - 12.0 years
22 - 25 Lacs
India
On-site
TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform) — see the illustrative sketch after this listing.
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required
Technical Architect with 7 - 12 years of experience

Salary
22-25 LPA

Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Location Type: In-person
Work Location: In person
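For illustration only (not part of the original listing): a minimal sketch of serving a trained model behind a FastAPI endpoint, one of the deployment options named in the skills list. The model artifact path and feature names are hypothetical.

```python
"""Minimal model-serving sketch with FastAPI (illustrative only).

Assumptions: a scikit-learn model has been pickled to model.pkl and expects
two numeric features; both the path and the feature names are hypothetical.
"""
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-model-service")

with open("model.pkl", "rb") as f:          # hypothetical artifact path
    model = pickle.load(f)

class Features(BaseModel):
    tenure_months: float    # hypothetical feature
    monthly_spend: float    # hypothetical feature

@app.post("/predict")
def predict(payload: Features) -> dict:
    row = [[payload.tenure_months, payload.monthly_spend]]
    return {"prediction": float(model.predict(row)[0])}

# Run locally with:  uvicorn service:app --reload   (assuming this file is service.py)
```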
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Hyderābād
On-site
Zenoti provides an all-in-one, cloud-based software solution for the beauty and wellness industry. Our solution allows users to seamlessly manage every aspect of the business in a comprehensive mobile solution: online appointment bookings, POS, CRM, employee management, inventory management, built-in marketing programs and more. Zenoti helps clients streamline their systems and reduce costs, while simultaneously improving customer retention and spending. Our platform is engineered for reliability and scale and harnesses the power of enterprise-level technology for businesses of all sizes.

Zenoti powers more than 30,000 salons, spas, medspas and fitness studios in over 50 countries. This includes a vast portfolio of global brands, such as European Wax Center, Hand & Stone, Massage Heights, Rush Hair & Beauty, Sono Bello, Profile by Sanford, Hair Cuttery, CorePower Yoga and TONI&GUY. Our recent accomplishments include surpassing a $1 billion unicorn valuation, being named Next Tech Titan by GeekWire, raising an $80 million investment from TPG, and ranking as the 316th fastest-growing company in North America on Deloitte's 2020 Technology Fast 500™. We are also proud to be recognized as Great Place to Work Certified™ for 2021-2022, which reaffirms our commitment to empowering people to feel good and find their greatness. To learn more about Zenoti visit: https://www.zenoti.com

Our products are built on Windows .NET and SQL Server and managed in AWS. Our web UX stack is built on jQuery, and some areas use AngularJS. Our middle tier is in C#, and we build our infrastructure on an extensive set of RESTful APIs. We build native iOS and Android apps, and are starting to experiment with Flutter and Dart. For select infrastructure components we use Python extensively, and we use Tableau for analytics dashboards. We use Redshift, Aurora, Redis ElastiCache, Lambda, and other AWS products to build and manage our complete service, moving towards serverless components. We deal with billions of API calls, millions of records in databases, and terabytes of data to be managed, with all the services we build having to run 24x7 at 99.99% availability. (An illustrative serverless sketch follows this listing.)

What will I be doing?
Design, develop, test, release and maintain components of Zenoti
Collaborate with a team of PM, DEV, and QA to release features
Work in a team following agile development practices (SCRUM)
Build usable software, released at high quality, that runs at scale and is adopted by customers
Learn to scale your features to handle 2x–4x growth every year and manage code that has to deal with millions of records and terabytes of data
Release new features into production every month, and get real feedback from thousands of customers to refine your designs
Be proud of what you work on, and obsess about the quality of the work you produce

What skills do I need?
4 to 7 years of experience in designing and developing applications on the Microsoft stack
Strong background in building web applications
Strong experience in HTML, JavaScript, CSS, jQuery, .NET/IIS with C#
Proficient in working with Microsoft SQL Server
Experience in developing web applications using Angular/Flutter/Dart a plus
Strong logical, analytical and problem-solving skills
Excellent communication skills
Can work in a fast-paced, ever-changing, startup environment

Why Zenoti?
Be part of an innovative company that is revolutionizing the wellness and beauty industry.
Work with a dynamic and diverse team that values collaboration, creativity, and growth.
Opportunity to lead impactful projects and help shape the global success of Zenoti's platform.
Attractive compensation.
Medical coverage for yourself and your immediate family.
Access to regular yoga, meditation, breathwork, and stress management sessions. We also include your family in benefit awareness initiatives.
Regular social activities, and opportunities to give back through social work and community initiatives.

Zenoti provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
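For illustration only (not Zenoti code, whose middle tier is .NET-based): a generic serverless-style sketch in Python of an AWS Lambda handler that answers an API call from a Redis (ElastiCache) cache and falls back to the database on a miss. The route parameter, key names, and database helper are hypothetical.

```python
"""Illustrative serverless sketch: Lambda handler with a Redis cache check.

Assumptions: the redis-py package is bundled with the function, REDIS_HOST is
set in the environment, and the event comes from an API Gateway proxy route.
"""
import json
import os

import redis  # redis-py client

cache = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379,
                    decode_responses=True)

def lambda_handler(event, context):
    appointment_id = event["pathParameters"]["id"]      # hypothetical route param
    key = f"appointment:{appointment_id}"

    cached = cache.get(key)
    if cached is not None:
        return {"statusCode": 200, "body": cached}

    record = load_from_database(appointment_id)          # hypothetical DB helper
    cache.setex(key, 300, json.dumps(record))            # cache for 5 minutes
    return {"statusCode": 200, "body": json.dumps(record)}

def load_from_database(appointment_id: str) -> dict:
    # Placeholder for an Aurora/SQL lookup; returns a stub record here.
    return {"id": appointment_id, "status": "confirmed"}
```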
Posted 1 week ago
3.0 years
6 - 8 Lacs
Hyderābād
On-site
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience writing complex SQL queries
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling
- Experience in statistical analysis packages such as R, SAS and MATLAB

Do you enjoy diving deep into data, building data models and developing business metrics to generate actionable insights? Are you looking for an opportunity to define an end-to-end analytics roadmap, work with cross-functional teams and leverage cutting-edge modern technologies and cloud solutions to develop analytics products? The DSP Analytics team has an exciting opportunity for a Business Intelligence Engineer (BIE) to improve Amazon's Delivery Service Partner (DSP) program through impactful data solutions.

The goal of Amazon's DSP organization is to exceed the expectations of our customers by ensuring that their orders, no matter how large or small, are delivered as quickly, accurately, and cost-effectively as possible. To meet this goal, Amazon is continually striving to innovate and provide a best-in-class delivery experience through the introduction of pioneering new products and services in the last-mile delivery space.

We are looking for an innovative, highly motivated and experienced BIE who can think holistically about problems to understand how systems work together to identify and execute both tactical and strategic projects. You will work closely with engineering teams, product managers, program managers and org leaders to deliver end-to-end data solutions aimed at continuously enhancing overall DSP performance and delivery quality. The business coverage is broad, and you will identify and prioritize what matters most for the business, quantify what is (or is not) working, invent and simplify the current process, and develop self-serve data and reporting solutions.

You should have excellent business and communication skills to be able to work with business owners to define the roadmap, develop milestones, define key business questions, and build datasets that answer those questions. The ideal candidate should have hands-on SQL and scripting-language experience and excel at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications.

Key job responsibilities
- Lead the design, implementation, and delivery of BI solutions for Sub-Same Day (SSD) DSP performance.
- Manage and execute entire projects from start to finish, including stakeholder management, data gathering and manipulation, modeling, problem solving, and communication of insights and recommendations.
- Extract, transform, and load data from many data sources using SQL, scripting and other ETL tools (an illustrative sketch follows this listing).
- Design, build, and maintain automated reporting, dashboards, and ongoing analysis to enable data-driven decisions across our team and with partner teams.
- Report key insight trends using statistical rigor to simplify and inform the larger team of noteworthy trends that impact the business.
- Retrieve and analyze data using a broad set of Amazon's data technologies (e.g., Redshift, AWS S3, Amazon internal platforms/solutions) and resources, knowing how, when, and which to use.
- Earn the trust of your customers and stakeholders by constantly obsessing over their business use cases and data needs, and helping them solve their problems by leveraging technology.
- Work closely with business stakeholders and the senior leadership team to review the roadmap, contributing to business strategy and how they can leverage analytics for success.

About the team
We are the core Amazon DSP BI team with the vision to enable data-, insights- and science-driven decision-making. We have exceptionally talented and fun-loving team members. In our team, you will have the opportunity to dive deep into complex business and data problems, drive large-scale technical solutions and raise the bar for operational excellence. We love to share ideas and learning with each other. We are a relatively new team and do not carry legacy operational burden. We believe in promoting and using ideas to disrupt the status quo.

Per the internal transfers guidelines, please reach out to the hiring manager for an informational through the "Request Informational" button on the job page.

- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets
- Experience developing and presenting recommendations of new metrics allowing better understanding of the performance of the business

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
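For illustration only (not Amazon code): a minimal sketch of the SQL-plus-Python pattern the responsibilities describe — pull records from a warehouse with SQL, then shape a simple weekly metric in Python. The connection string, table and column names are hypothetical.

```python
"""Illustrative sketch: query a Redshift-style warehouse and compute a weekly
on-time delivery rate. All identifiers and credentials are placeholders.
"""
import pandas as pd
from sqlalchemy import create_engine

# Redshift speaks the PostgreSQL wire protocol, so a postgresql:// URL works
# with psycopg2 installed; the host and credentials below are placeholders.
engine = create_engine("postgresql+psycopg2://user:password@redshift-host:5439/analytics")

QUERY = """
    SELECT delivery_week, region,
           SUM(CASE WHEN delivered_on_time THEN 1 ELSE 0 END) AS on_time,
           COUNT(*) AS total
    FROM dsp_deliveries            -- hypothetical table
    WHERE delivery_week >= DATEADD(week, -12, CURRENT_DATE)
    GROUP BY delivery_week, region
"""

df = pd.read_sql(QUERY, engine)
df["on_time_rate"] = df["on_time"] / df["total"]

# A pivot like this is what would typically feed a dashboard or BI extract.
weekly = df.pivot_table(index="delivery_week", columns="region",
                        values="on_time_rate")
print(weekly.round(3).tail())
```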
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Position: Technical Support Specialist (Japanese)
Location: Mumbai, India

About LRN
Do you want to use your support expertise to help people around the world do the right thing? Join us at LRN to be a part of a global company where you can make an impact.
LRN is a SaaS-based e-learning provider with a presence across the US, EMEA, APAC and LatAm. More than 2,500 companies worldwide (including some of the world's most recognizable brands) utilize LRN services and leverage LRN e-learning courses to help navigate complex regulatory environments and foster ethical, responsible, and inclusive cultures. In partnership with LRN, companies translate their values into concrete corporate practices, training materials, and leadership behaviors that create a sustainable competitive advantage. By acting upon shared values, companies and their people find the means to outbehave and outperform.

About the role:
The LRN team develops data specifically for a set of key product domains like e-learning, disclosures, personalization and completions. We work with AWS technologies like Redshift, S3, Glue, and Lambda. You'll build our data lake and partner with Product and BI teams to build new behavioral events, pipelines, datasets, models, and reporting to support their initiatives. You'll also assist in developing our analytics capabilities in QuickSight and building out our real-time reporting capabilities.

Your responsibilities will include:
Provide support to internal and external customers on all aspects of LRN's proprietary applications (functions include: campaign set-up, system-generated e-mails, system reporting, user data management, and customization).
Using defined systems and processes, keep both internal and external stakeholders updated as to the status of call tickets, requests, projects, issues, and changes.
Update necessary tracking and reporting systems to ensure that group statistics can be tracked, managed and measured.
Manage expectations of internal and external customers, ensuring capabilities are not exceeded to the detriment of the customer. Escalate issues related to capabilities where appropriate.
Serve as an internal advocate for field personnel, as well as external customers.
Prioritize requests based on need and impact. Work with internal stakeholders to address priorities.
Prepare and present (in written and verbal forms) product information that will assist customers with the capabilities of LRN's systems.
Maintain quality levels for all work related to customers' requests.
Ensure there is an ongoing dialogue between LRN and each customer with whom the Partner Assistance Center team member is working.
Collaborate with peers to discuss unique solutions and to document them.
Perform other duties as assigned.

Requirements
Bachelor's degree.
4-6 years of L2/L3-level experience in a customer-facing role in an application support environment.
Proficiency in the Japanese language (written and spoken) — mandatory.
Excellent communication skills in English. This position requires above-par skills in written and spoken English.
Extensive experience in supporting enterprise customers based in the US and Europe.
Familiarity with HTML and CSS.
Familiarity with learning management systems.
Expertise in the Microsoft Office Suite, especially Excel. Should be able to understand and handle CSV files.
Ability to review a Partner request and apply the necessary analytical skills to ensure successful delivery of the end product.
This role will involve direct Partner contact; the ability to work professionally with others who are potentially very frustrated is mandatory.
An inherent ability to multitask and manage customer expectations to the satisfaction of all involved parties is critical to the success of the incumbent.
Ability to consistently spot and report issues before they reach the point of becoming critical.
A desire to constantly examine issues, looking for opportunities to improve processes and/or technology.
Must have an inherent attention to detail.

Benefits
LRN is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Posted 1 week ago
0 years
3 - 4 Lacs
Mumbai
On-site
Company Description
Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions.
We believe in the power of entrepreneurial capitalism and use it on various platforms to ignite the conversations that drive systemic change in business, culture, and society. We celebrate success and are committed to using our megaphone to drive diversity, equity and inclusion.
We are the world's biggest business media brand and we consistently place in the top 20 of the most popular sites in the United States, in good company with brands like Netflix, Apple and Google. In short, we have a big platform and we use it responsibly.

Job Description
The Data Research Engineering Team is a brand-new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.
A typical day in the life of a Data Research Engineer – Team Lead will involve guiding team members through code standards, optimization techniques, and best practices in debugging and testing. They oversee the development and consistent application of testing protocols, including unit, integration, and performance testing, ensuring a high standard of code quality across the team. They work closely with engineers, offering technical mentorship in areas like Git version control, task tracking, and documentation processes, as well as advanced Python and database practices.

Responsibilities
Technical Mentorship and Code Quality: Guide and mentor team members on coding standards, optimization techniques, and debugging. Conduct thorough code reviews, provide constructive feedback, and enforce code quality standards to ensure maintainable and efficient code.
Testing and Quality Assurance Leadership: Develop, implement, and oversee rigorous testing protocols, including unit, integration, and performance testing, to guarantee the reliability and robustness of all projects. Advocate for automated testing and ensure comprehensive test coverage within the team.
Process Improvement and Documentation: Establish and maintain high standards for version control, documentation, and task tracking across the team. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Serve as the team's primary resource for troubleshooting complex issues, particularly in Python, MySQL, GitKraken, and Knime. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including architecture design, data engineering best practices, and advanced Python programming. Guide the team in building scalable and reliable data solutions.
Cross-Functional Collaboration: Work closely with data scientists, product managers, and quality assurance teams to align on data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, cleaning, normalization, transformation, aggregation, and visualization to support production-level workloads (see the illustrative sketch after this listing).
Big Data: Leverage distributed processing frameworks such as PySpark and Kafka to manage and process massive datasets efficiently.
Cloud-Native Data Solutions: Develop and optimize workflows for cloud-native data solutions, including BigQuery, Databricks, Snowflake, Redshift, and tools like Airflow and AWS Glue.
Regulations: Ensure compliance with regulatory frameworks like GDPR and implement robust data governance and security measures.

Skills and Experience
Experience: 8+ years
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to design modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Git and GitKraken for version control, with experience in task management, ideally on GitHub. Familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing. Ability to embed automated testing within development workflows.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Effective communicator capable of aligning cross-functional teams on project requirements, technical standards, and data workflows.
Adaptability and Continuous Learning: A commitment to staying updated with the latest in data engineering, coding practices, and tools, with a proactive approach to learning and sharing knowledge within the team.
Data Pipelines: Comprehensive expertise in building and optimizing data pipelines, including data transfer, transformation, and visualization, for real-world applications.
Distributed Systems: Strong knowledge of distributed systems and big data tools such as PySpark and Kafka.
Data Warehousing: Proficiency with modern cloud data warehousing platforms (BigQuery, Databricks, Snowflake, Redshift) and orchestration tools (Airflow, AWS Glue).
Regulations: Demonstrated understanding of regulatory compliance requirements (e.g., GDPR) and best practices for data governance and security in enterprise settings.

Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

Qualifications
Educational Background: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.

Additional Information
All your information will be kept confidential according to EEO guidelines.
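For illustration only (not Forbes Advisor code): a small PySpark step of the kind the pipeline responsibilities describe — cleaning and normalizing a raw extract before aggregation. File paths and column names are hypothetical.

```python
"""Illustrative PySpark sketch: clean, normalize, de-duplicate, aggregate.

Assumptions: the input CSV and S3 paths are placeholders, and the columns
`provider`, `plan_id`, and `monthly_fee` are hypothetical.
"""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clean-normalize-sketch").getOrCreate()

raw = spark.read.csv("s3://example-bucket/raw/listings.csv",   # hypothetical path
                     header=True, inferSchema=True)

clean = (
    raw
    .withColumn("provider", F.lower(F.trim(F.col("provider"))))   # normalize text
    .withColumn("monthly_fee", F.col("monthly_fee").cast("double"))
    .dropDuplicates(["provider", "plan_id"])                       # de-duplicate
    .na.drop(subset=["provider", "monthly_fee"])                   # drop unusable rows
)

summary = clean.groupBy("provider").agg(
    F.avg("monthly_fee").alias("avg_fee"),
    F.count("*").alias("plans"),
)

summary.write.mode("overwrite").parquet("s3://example-bucket/curated/fee_summary/")
```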
Posted 1 week ago
2.0 - 3.0 years
7 Lacs
Mumbai
On-site
Job Title: Tableau Developer
Experience: 2-3 Years
Location: Mumbai, India

About the Role:
We are seeking a highly motivated and skilled Tableau Developer with 2-3 years of proven experience to join our dynamic team in Mumbai. In this role, you will be instrumental in transforming complex data into insightful and interactive dashboards and reports using Tableau. You will work closely with business stakeholders, data analysts, and other technical teams to understand reporting requirements, develop effective data visualizations, and contribute to data-driven decision-making within the organization.

Roles and Responsibilities:
Dashboard Development:
Design, develop, and maintain compelling and interactive Tableau dashboards and reports that meet business requirements and enhance user experience.
Create various types of visualizations, including charts, graphs, maps, and tables, to effectively communicate data insights.
Implement advanced Tableau features such as calculated fields, parameters, sets, groups, and Level of Detail (LOD) expressions to create sophisticated analytics.
Optimize Tableau dashboards for performance and scalability, ensuring quick loading times and efficient data retrieval.
Data Sourcing and Preparation:
Connect to various data sources (e.g., SQL Server, Oracle, Excel, cloud-based data platforms like AWS Redshift, Google BigQuery, etc.) and extract, transform, and load (ETL) data for reporting purposes (an illustrative sketch follows this listing).
Perform data analysis, validation, and cleansing to ensure the accuracy, completeness, and consistency of data used in reports.
Collaborate with data engineers and data analysts to understand data structures, identify data gaps, and ensure data quality.
Requirements Gathering & Collaboration:
Work closely with business users, stakeholders, and cross-functional teams to gather and understand reporting and analytical requirements.
Translate business needs into technical specifications and develop effective visualization solutions.
Participate in discussions and workshops to refine requirements and propose innovative reporting approaches.
Troubleshooting and Support:
Diagnose and resolve issues related to data accuracy, dashboard performance, and report functionality.
Provide ongoing support and maintenance for existing Tableau dashboards and reports.
Assist end users with Tableau-related queries and provide training as needed.
Documentation and Best Practices:
Create and maintain comprehensive documentation for Tableau dashboards, data sources, and development processes.
Adhere to data visualization best practices and design principles to ensure consistency and usability across all reports.
Contribute to code reviews and knowledge sharing within the team.
Continuous Improvement:
Stay up to date with the latest Tableau features, updates, and industry trends in data visualization and business intelligence.
Proactively identify opportunities for improvement in existing reports and propose enhancements.
Participate in an Agile development environment, adapting to changing priorities and contributing to sprint goals.

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
2 years of hands-on experience as a Tableau Developer, with a strong portfolio of developed dashboards and reports.
Proficiency in Tableau Desktop and Tableau Server (including publishing, managing permissions, and performance monitoring).
Strong SQL skills for data extraction, manipulation, and querying from various databases.
Solid understanding of data warehousing concepts, relational databases, and ETL processes.
Familiarity with data visualization best practices and design principles.
Excellent analytical and problem-solving skills with a keen eye for detail.
Strong communication skills (verbal and written) with the ability to explain complex data insights to non-technical stakeholders.
Ability to work independently and collaboratively in a team-oriented environment.
Adaptability to changing business requirements and a fast-paced environment.

Additional Qualifications:
Experience with other BI tools (e.g., Power BI, Qlik Sense) is a plus.
Familiarity with scripting languages like Python or R for advanced data manipulation and analytics.
Knowledge of cloud data platforms (e.g., AWS, Azure, GCP).
Experience with Tableau Prep for data preparation.

Job Types: Full-time, Permanent
Pay: Up to ₹750,000.00 per year
Benefits:
Health insurance
Provident Fund
Schedule:
Day shift
Monday to Friday
Work Location: In person
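For illustration only (not part of the original listing): a minimal sketch of extracting and pre-aggregating data with SQL and Python so that Tableau can connect to a clean source. The connection DSN, table and column names are hypothetical.

```python
"""Illustrative sketch: build a pre-aggregated extract for a Tableau data source.

Assumptions: SQL Server reached via a configured ODBC DSN (placeholder name),
pyodbc installed, and hypothetical table/column names.
"""
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://user:password@sales-dsn")  # placeholder DSN

QUERY = """
    SELECT order_date, region, product_category,
           SUM(net_amount) AS revenue,
           COUNT(DISTINCT order_id) AS orders
    FROM dbo.sales_orders           -- hypothetical table
    WHERE order_date >= DATEADD(month, -12, GETDATE())
    GROUP BY order_date, region, product_category
"""

extract = pd.read_sql(QUERY, engine)
extract["avg_order_value"] = extract["revenue"] / extract["orders"]

# Tableau Desktop can connect to this file directly (or the data can be pushed
# to a published data source); pre-aggregating keeps dashboard queries light.
extract.to_csv("sales_extract.csv", index=False)
```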
Posted 1 week ago