6.0 - 10.0 years
7 - 8 Lacs
Gurgaon
On-site
You are as unique as your background, experience and point of view. Here, you'll be encouraged, empowered and challenged to be your best self. You'll work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you'll have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families and communities around the world.

Job Description:

Role Overview:
We are looking for an experienced and business-savvy Business Analyst to join our Data & Analytics team, primarily supporting Data Warehousing and BI reporting projects in the Life Insurance domain. This role will serve as the key bridge between business stakeholders and the technical team (Data Modelers, BI Developers, Data Engineers), ensuring that reporting and analytics solutions are aligned with business objectives.

Key Responsibilities:
- Engage with business users to gather, analyze, and document requirements related to data, reporting, and dashboards
- Define and validate KPIs, metrics, dimensions, and reporting logic with business users
- Translate business needs into structured requirement documents for DWH and BI teams
- Support data modelers by providing business rules, field definitions, and reporting context
- Act as a Subject Matter Expert (SME) in Life Insurance reporting for internal and external stakeholders
- Participate in UAT for reports/dashboards and support defect triaging
- Collaborate on post-go-live reporting enhancements and BAU reporting demand

Required Skills:
- 6–10 years of experience as a Business Analyst in Data Warehousing or BI projects
- Strong understanding of BI and reporting platforms (e.g., Power BI, Tableau, Qlik)
- Familiarity with data modeling concepts (Star Schema, Snowflake, SCD, etc.)
- Experience working with Life Insurance data (Policy, Premium, Commission, Claims, etc.)
- Excellent requirement gathering, documentation, and stakeholder communication skills
- Ability to work with cross-functional teams in Agile/iterative environments

Nice to Have:
- Exposure to Data Vault 2.0 methodology
- Basic knowledge of SQL and data profiling
- Experience working with cloud-based DWHs (AWS Redshift, Snowflake, etc.)

Job Category: Advanced Analytics
Posting End Date: 17/08/2025
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description

Come help Amazon create cutting-edge data and science-driven technologies for delivering packages to the doorstep of our customers! The Last Mile Routing & Planning organization builds the software, algorithms and tools that make the "magic" of home delivery happen: our flow, sort, dispatch and routing intelligence systems are responsible for the billions of daily decisions needed to plan and execute safe, efficient and frustration-free routes for drivers around the world. Our team supports deliveries (and pickups!) for Amazon Logistics, Prime Now, Amazon Flex, Amazon Fresh, Lockers, and other new initiatives.

As part of the Last Mile Science & Technology organization, you'll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will present your analyses, plans, and recommendations to senior leadership and connect new ideas to drive change. Analytical ingenuity and leadership, business acumen, effective communication capabilities, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.

Responsibilities
- Create actionable business insights through analytical and statistical rigor to answer business questions, drive business decisions, and develop recommendations to improve operations
- Collaborate with Product Managers, software engineering, data science, and data engineering partners to design and develop analytic capabilities
- Define and govern key business metrics, build automated dashboards and analytic self-service capabilities, and engineer data-driven processes that drive business value
- Navigate ambiguity to develop analytic solutions and shape work for junior team members

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, Chi-squared); a worked t-test sketch follows this posting
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis, correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2994626
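To make the statistical-methods requirement above concrete, here is a minimal Python sketch of a two-sample t-test on synthetic route-duration data; with real data, the two samples would come from a Redshift SQL query rather than a random generator, and all numbers here are invented for illustration.

```python
# A minimal sketch of the kind of analysis the posting describes: comparing
# two groups of route durations with a two-sample t-test. Data is synthetic;
# in practice it would come from a warehouse query.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical samples: delivery route durations (minutes) under two plans.
baseline = rng.normal(loc=210, scale=25, size=500)
variant = rng.normal(loc=205, scale=25, size=500)

# Welch's two-sample t-test: is the mean route duration different?
t_stat, p_value = stats.ttest_ind(baseline, variant, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```

Welch's variant (equal_var=False) is used because the two groups need not share a variance; with equal group sizes and variances it reduces to the classic test.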
Posted 1 week ago
4.0 years
6 - 9 Lacs
Chennai
Remote
Logitech is the Sweet Spot for people who want their actions to have a positive global impact while having the flexibility to do it in their own way.

Job Description

The Role:
We are looking for a candidate to join our team who will be involved in the ongoing development of our Enterprise Data Warehouse (EDW) and supporting our POS (Point of Sale) Channel Data Management team. This role will include participating in the loading and extraction of data, including POS, to and from the warehouse. The ideal candidate will be involved in all stages of the project lifecycle, from initial planning through to deployment in production. A key focus of the role will be data analysis, data modeling, and ensuring these aspects are successfully implemented in the production environment.

Your Contribution:
Be Yourself. Be Open. Stay Hungry and Humble. Collaborate. Challenge. Decide and just Do. These are the behaviors you'll need for success at Logitech. In this role you will:
- Design, develop, document, and test ETL solutions using industry-standard tools.
- Design physical and reporting data models for seamless cross-functional and cross-system data reporting.
- Enhance point-of-sale datasets with additional data points to provide stakeholders with useful insights.
- Ensure data integrity by rigorously validating and reconciling data obtained from third-party providers.
- Collaborate with data providers and internal teams to address customer data discrepancies and enhance data quality.
- Work closely across our D&I teams to deliver datasets optimized for consumption in reporting and visualization tools like Tableau.
- Collaborate with channel data and cross-functional teams to define requirements for POS and MDM data flows.
- Support customer MDM & POS ad hoc requests and data clarifications from the Channel Data Team and the Finance Team.
- Collaborate with the BIOPS team to support quarter-end user activities and ensure compliance with SOX regulations.
- Be willing to explore and learn new technologies and concepts to provide the right kind of solution.

Key Qualifications:
For consideration, you must bring the following minimum skills and behaviors to our team:
- A total of 4 to 7 years of experience in ETL design, development, and populating data warehouses, including experience with heterogeneous OLTP sources such as Oracle R12 ERP systems and other cloud technologies.
- At least 3 years of hands-on experience with Pentaho Data Integration or similar ETL tools.
- Practical experience working with cloud-based data warehouses such as Snowflake and Redshift.
- Significant hands-on experience with Snowflake utilities, including SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures (a Streams/Tasks sketch follows this posting).
- Comprehensive expertise in databases, data acquisition, ETL strategies, and the tools and technologies within Pentaho DI and Snowflake.
- Demonstrated experience in designing complex ETL processes for extracting data from various sources, including XML files, JSON, RDBMS, and flat files.
- Exposure to standard support ticket management tools.
- A strong understanding of Business Intelligence and data warehousing concepts and methodologies.
- Extensive experience in data analysis and root cause analysis, along with proven problem-solving and analytical thinking capabilities.
- A solid understanding of software engineering principles and proficiency in working with Unix/Linux/Windows operating systems, version control, and office software.
- A deep understanding of data warehousing principles and cloud architecture, including SQL optimization techniques for building efficient and scalable data systems.
- Familiarity with Snowflake's unique features, such as its multi-cluster architecture and shareable data capabilities.
- Excellent skills in writing and optimizing SQL queries to ensure high performance and data accuracy across all systems.
- The ability to troubleshoot and resolve data quality issues promptly, maintaining data integrity and reliability.
- Strong communication skills for effective collaboration with both technical and non-technical teams to ensure a clear understanding of data engineering requirements.

In addition, preferable skills and behaviors include:
- Exposure to an Oracle ERP environment
- Basic understanding of reporting tools like OBIEE and Tableau

Education:
BS/BTech/MS in Computer Science, Information Systems or a related technical field, or equivalent industry expertise.

Logitech is the sweet spot for people who are passionate about products, making a mark, and having fun doing it. As a company, we're small and flexible enough for every person to take initiative and make things happen. But we're big enough in our portfolio, and reach, for those actions to have a global impact. That's a pretty sweet spot to be in and we're always striving to keep it that way.

Across Logitech we empower collaboration and foster play. We help teams collaborate and learn from anywhere, without compromising on productivity or continuity, so it should be no surprise that most of our jobs are open to work from home from most locations. Our hybrid work model allows some employees to work remotely while others work on-premises. Within this structure, you may have teams or departments split between working remotely and working in-house.

Logitech is an amazing place to work because it is full of authentic people who are inclusive by nature as well as by design. Being a global company, we value our diversity and celebrate all our differences. Don't meet every single requirement? Not a problem. If you feel you are the right candidate for the opportunity, we strongly recommend that you apply. We want to meet you!

We offer comprehensive and competitive benefits packages and working environments that are designed to be flexible and help you to care for yourself and your loved ones, now and in the future. We believe that good health means more than getting medical care when you need it. Logitech supports a culture that encourages individuals to achieve good physical, financial, emotional, intellectual and social wellbeing so we all can create, achieve and enjoy more and support our families. We can't wait to tell you more about them, as there are too many to list here and they vary based on location.

All qualified applicants will receive consideration for employment without regard to race, sex, age, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability. If you require an accommodation to complete any part of the application process, are limited in the ability to access or use this online application process, and need an alternative method for applying, you may contact us toll free at +1-510-713-4866 for assistance and we will get back to you as soon as possible.
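As an illustration of the Snowflake Streams and Tasks utilities listed above, here is a hedged Python sketch using the snowflake-connector-python client; every object name (warehouse, database, schema, tables) is a hypothetical stand-in rather than Logitech's actual environment, and credentials are read from the environment.

```python
# A hedged sketch of the Snowflake Streams + Tasks pattern the posting names:
# a stream captures changes on a raw POS table, and a scheduled task folds
# them into a curated table. All object names are hypothetical.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",   # hypothetical warehouse
    database="EDW",       # hypothetical database
    schema="POS",         # hypothetical schema
)
cur = conn.cursor()

# Change-data stream over the raw POS landing table.
cur.execute("CREATE STREAM IF NOT EXISTS pos_sales_stream ON TABLE raw_pos_sales")

# Task that merges new rows into the curated table every hour.
cur.execute("""
    CREATE TASK IF NOT EXISTS load_pos_sales
      WAREHOUSE = ETL_WH
      SCHEDULE = '60 MINUTE'
    AS
      INSERT INTO curated_pos_sales
      SELECT sale_id, store_id, sku, quantity, amount, sale_ts
      FROM pos_sales_stream
      WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK load_pos_sales RESUME")  # tasks start suspended
conn.close()
```

The design point this illustrates: the stream makes the load incremental (only changed rows are read), which is what keeps quarter-end reconciliation runs cheap on large POS tables.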
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description

Come help Amazon create cutting-edge data and science-driven technologies for delivering packages to the doorstep of our customers! The Last Mile Routing & Planning organization builds the software, algorithms and tools that make the "magic" of home delivery happen: our flow, sort, dispatch and routing intelligence systems are responsible for the billions of daily decisions needed to plan and execute safe, efficient and frustration-free routes for drivers around the world. Our team supports deliveries (and pickups!) for Amazon Logistics, Prime Now, Amazon Flex, Amazon Fresh, Lockers, and other new initiatives.

As part of the Last Mile Science & Technology organization, you'll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will present your analyses, plans, and recommendations to senior leadership and connect new ideas to drive change. Analytical ingenuity and leadership, business acumen, effective communication capabilities, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.

Responsibilities
- Create actionable business insights through analytical and statistical rigor to answer business questions, drive business decisions, and develop recommendations to improve operations
- Collaborate with Product Managers, software engineering, data science, and data engineering partners to design and develop analytic capabilities
- Define and govern key business metrics, build automated dashboards and analytic self-service capabilities, and engineer data-driven processes that drive business value
- Navigate ambiguity to develop analytic solutions and shape work for junior team members

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis, correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2994635
Posted 1 week ago
4.0 years
4 - 6 Lacs
Noida
On-site
Required Experience: 4 - 12 Years
Skills: Pinot, AWS Glue, AWS Redshift + 4 more

Job Description - Senior Data Engineer

We at Pine Labs are looking for those who share our core belief - "Every Day is Game day". We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.

Role Purpose
We are looking for skilled Senior Data Engineers with 4-12 years of experience to join our growing team. You will design, build, and optimize real-time and batch data pipelines, leveraging AWS cloud technologies and Apache Pinot to enable high-performance analytics for our business. This role is ideal for engineers who are passionate about working with large-scale data and real-time processing.

Responsibilities we entrust you with
- Data Pipeline Development: Build and maintain robust ETL/ELT pipelines for batch and streaming data using tools like Apache Spark, Apache Flink, or AWS Glue. Develop real-time ingestion pipelines into Apache Pinot using streaming platforms like Kafka or Kinesis (see the sketch after this posting).
- Real-Time Analytics: Configure and optimize Apache Pinot clusters for sub-second query performance and high availability. Design indexing strategies and schema structures to support real-time and historical data use cases.
- Cloud Infrastructure Management: Work extensively with AWS services such as S3, Redshift, Kinesis, Lambda, DynamoDB, and CloudFormation to create scalable, cost-effective solutions. Implement infrastructure as code (IaC) using tools like Terraform or AWS CDK.
- Performance Optimization: Optimize data pipelines and queries to handle high throughput and large-scale data efficiently. Monitor and tune Apache Pinot and AWS components to achieve peak performance.
- Data Governance & Security: Ensure data integrity, security, and compliance with organizational and regulatory standards (e.g., GDPR, SOC 2). Implement data lineage, access controls, and auditing mechanisms.
- Collaboration: Work closely with data scientists, analysts, and other engineers to translate business requirements into technical solutions. Collaborate in an Agile environment, participating in sprints, standups, and retrospectives.

Relevant work experience
- 4-12 years of hands-on experience in data engineering or related roles.
- Proven expertise with AWS services and real-time analytics platforms like Apache Pinot or similar technologies (e.g., Druid, ClickHouse).
- Proficiency in Python, Java, or Scala for data processing and pipeline development.
- Strong SQL skills and experience with both relational and NoSQL databases.
- Hands-on experience with streaming platforms such as Apache Kafka or AWS Kinesis.
- Familiarity with big data tools like Apache Spark, Flink, or Airflow.
- Strong problem-solving skills and a proactive approach to challenges.
- Excellent communication and collaboration abilities in cross-functional teams.

Preferred Qualifications:
- Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg).
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to monitoring tools like Prometheus, Grafana, or CloudWatch.
- Familiarity with data visualization tools like Tableau or Superset.

What We Offer:
- Competitive compensation based on experience.
- Flexible work environment with opportunities for growth.
- Work on cutting-edge technologies and projects in data engineering and analytics.

What we Value in Our people
- You take the shot: You Decide Fast and You Deliver Right
- You are the CEO of what you do: you show ownership and make things happen
- You own tomorrow: by building solutions for the merchants and doing the right thing
- You sign your work like an artist: You seek to learn and take pride in the work you do
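To make the Kafka-to-Pinot ingestion concrete: a real-time Pinot table is declared with a table config whose streamConfigs point at a Kafka topic. Below is a hedged Python sketch that posts such a config to a Pinot controller REST endpoint; the host, topic, table, and column names are all hypothetical, and the exact config keys should be verified against the Pinot documentation for your version.

```python
# A hedged sketch of registering a real-time Pinot table that ingests from
# Kafka. Assumes the matching schema is already registered; every name and
# endpoint here is a hypothetical stand-in.
import requests

table_config = {
    "tableName": "payments_events",
    "tableType": "REALTIME",
    "segmentsConfig": {
        "timeColumnName": "event_ts",
        "schemaName": "payments_events",
        "replication": "2",
    },
    "tableIndexConfig": {
        "loadMode": "MMAP",
        "invertedIndexColumns": ["merchant_id"],  # index for fast filters
        "streamConfigs": {
            "streamType": "kafka",
            "stream.kafka.topic.name": "payments.events",
            "stream.kafka.broker.list": "kafka:9092",
            "stream.kafka.consumer.type": "lowlevel",
            "stream.kafka.decoder.class.name":
                "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
        },
    },
    "tenants": {},
    "metadata": {},
}

# Pinot controller REST API; host and port are hypothetical.
resp = requests.post("http://pinot-controller:9000/tables", json=table_config)
resp.raise_for_status()
print("table registered:", resp.json())
```

In production the schema would be registered first, and ingestion settings such as flush thresholds and the consumer factory class tuned per topic; the point of the sketch is that Pinot's "pipeline" is largely declarative configuration rather than consumer code.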
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Noida
On-site
Lead Data Engineer
Noida

Company Intro:
Binmile is a global fast-growing outsourced IT Services Company, with a culture that is passionate about innovation and automation. Our mission is to create an extraordinary impact on the world through our culture and digital technology excellence. Binmile combines agility and speed of implementation to tailor innovative future-focused solutions in Software Product Engineering, all fueled by AI and automation.

Key Responsibilities:
- Lead the design and development of scalable data pipelines and architectures.
- Mentor and guide a team of data engineers, ensuring efficient delivery of data solutions.
- Collaborate with data scientists, analysts, and cross-functional stakeholders to understand data needs and translate them into technical solutions.
- Implement best practices in data engineering, including data quality, data governance, and performance optimization.
- Develop and maintain robust ETL/ELT workflows using modern tools and frameworks.
- Work extensively with AWS cloud services (e.g., S3, Redshift, Glue, Lambda, EMR) for data storage, processing, and orchestration.
- Write clean, scalable, and maintainable code in Python and Scala.
- Optimize SQL queries and manage data warehousing solutions.
- Ensure data security, compliance, and privacy standards are met.

Required Skills & Qualifications:
- Expert-level coding skills in Python and Scala.
- 7–10 years of hands-on experience in data engineering and leading data teams.
- Strong proficiency in AWS cloud services for data management and analytics.
- Advanced knowledge of SQL for data manipulation and query optimization.
- Proven experience in designing and maintaining ETL/ELT pipelines.
- Familiarity with modern data platforms like Snowflake, Databricks, or similar is a plus.
- Solid understanding of data modeling, data warehousing, and big data technologies.
- Excellent problem-solving skills and a collaborative mindset.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Top Benefits and Perks:
As a Binmillier, you'll enjoy:
- Opportunity to work with the Leadership team
- Health Insurance
- Flexible working structure

Binmile is an Equal Employment Opportunity Employer. We celebrate diversity and are committed to building a team that represents various backgrounds, perspectives, and skills.
Posted 1 week ago
7.0 years
3 - 9 Lacs
India
On-site
Job Title: Technical Lead – Python & AWS
Experience: 7+ Years
Domain: US Healthcare IT
Notice Period: Immediate Only

Role Overview:
We are seeking a Technical Lead (Python/AWS) to lead backend architecture and development of scalable, secure, and high-performance microservices-based systems. The ideal candidate will have strong hands-on skills in Python (Flask), AWS (Lambda, Redshift, Glue, S3), RDBMS optimization, and CI/CD automation.

Key Responsibilities:
- Architect and lead backend systems for scalability and uptime.
- Build and maintain microservices & REST APIs with Flask (a minimal sketch follows this posting).
- Design optimized RDBMS/SQL databases (PostgreSQL, Aurora).
- Manage production-grade AWS deployments (Lambda, Glue, Redshift, S3).
- Drive CI/CD using GitHub Actions, Docker, Terraform, Kubernetes.
- Mentor and lead developers; enforce coding standards and reviews.
- Implement logging and monitoring (CloudWatch, Prometheus, ELK).
- Align development with business goals and estimation accuracy.
- Ensure security, compliance, and performance optimization.

Must-Have Skills:
- Python (Flask), Celery
- AWS services: Lambda, Glue, Redshift, S3
- Microservices & API Gateway, distributed systems
- SQL, PostgreSQL, performance tuning
- CI/CD: GitHub Actions, Docker, Kubernetes, Terraform
- Monitoring: ELK, CloudWatch, Prometheus
- Agile/Scrum and excellent communication

Preferred Experience:
- Background in high-scale product companies with a focus on uptime and defect prevention
- Deep exposure to the US Healthcare domain is a plus
- Experience in leading and mentoring distributed engineering teams

Job Type: Full-time
Pay: ₹30,000.00 - ₹80,000.00 per month
Benefits: Cell phone reimbursement, paid sick time
Location Type: In-person
Schedule: Day shift, fixed shift, morning shift
Application Question(s): What is your notice period (in days)?
Work Location: In person
Expected Start Date: 07/07/2025
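Since the role centers on Flask-based REST microservices, here is a minimal self-contained sketch of that pattern; the /claims resource and its in-memory store are hypothetical illustrations, not the employer's actual API, and a real service would back this with PostgreSQL/Aurora.

```python
# A minimal sketch of a Flask REST microservice. Routes and payload shapes
# are hypothetical; the dict stands in for a relational store.
from flask import Flask, jsonify, request

app = Flask(__name__)

CLAIMS = {}  # in-memory stand-in for a PostgreSQL/Aurora-backed table

@app.get("/health")
def health():
    # Liveness probe for a load balancer or Kubernetes readiness check.
    return jsonify(status="ok")

@app.post("/claims")
def create_claim():
    payload = request.get_json(force=True)
    claim_id = str(len(CLAIMS) + 1)
    CLAIMS[claim_id] = payload
    return jsonify(id=claim_id), 201

@app.get("/claims/<claim_id>")
def get_claim(claim_id):
    claim = CLAIMS.get(claim_id)
    if claim is None:
        return jsonify(error="not found"), 404
    return jsonify(claim)

if __name__ == "__main__":
    app.run(port=8080)
```

In the architecture the posting describes, a service like this would sit behind API Gateway, with CloudWatch/Prometheus scraping its health endpoint and CI/CD (GitHub Actions plus Docker) building and deploying the container.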
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
- BI: Tableau Desktop, Tableau report and dashboard design, data visualization and analysis, Tableau Server, Tableau Reader; Cognos Report Studio, Query Studio, and Cognos Connection a plus
- Languages: SQL, PL/SQL, T-SQL, SQL*Plus; SAS Base a plus
- Perform complex MS Excel operations: pivot tables and filter operations on the underlying data
- Knowledge of reporting tools like Qlik Sense and QlikView; statistical tools like Advanced Excel (VLOOKUP, charts, dashboard design), Visual Basic using Visual Studio, and MS Access a plus
- Critical thinking and analysis, with good interpersonal and communication skills
- Ability to adapt to and learn new technologies and become quickly proficient with them
- Data mining experience
- Blended data from multiple sources like flat files, Excel, Oracle, and the Tableau Server environment
- Used cloud sources like Amazon AWS Redshift, Snowflake, Google Drive, MS Excel, Oracle

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
8.0 years
0 Lacs
India
Remote
About Juniper Square

Our mission is to unlock the full potential of private markets. Privately owned assets like commercial real estate, private equity, and venture capital make up half of our financial ecosystem yet remain inaccessible to most people. We are digitizing these markets, and as a result, bringing efficiency, transparency, and access to one of the most productive corners of our financial ecosystem. If you care about making the world a better place by making markets work better through technology – all while contributing as a member of a values-driven organization – we want to hear from you.

Juniper Square offers employees a variety of ways to work, ranging from a fully remote experience to working full-time in one of our physical offices. We invest heavily in digital-first operations, allowing our teams to collaborate effectively across 27 U.S. states, 2 Canadian provinces, India, Luxembourg, and England. We also have physical offices in San Francisco, New York City, Mumbai and Bangalore for employees who prefer to work in an office some or all of the time.

About Your Role
Juniper Square is growing rapidly, and our data needs are growing even faster, so we're growing our Data Engineering Team. As a Senior Data Engineer your role will be pivotal to evolving our existing data and reporting experiences. You'll build out pipelines to gather data from multiple sources and make it available for analysis. You will shape both internal and external analytics products to help guide business-critical decisions, enhance workflows, and improve decision-making.

What You'll Do
- Design and implement sophisticated data models in SQL.
- Work closely with the other Software Engineers to ensure sound, scalable implementation.
- Act as a technical expert on our team regarding all things data, especially as the data team grows and evolves.
- Introduce new technologies to evolve and enhance our data pipeline capabilities.
- Document data models, architectural decisions and data dictionaries to enable collaboration, maintainability and usability of our analytics platforms and code.
- Assist with governance, guidance, code reviews, and access controls so that we maintain consistency, quality, and business confidentiality as we scale analytics access across the company and to customers.
- Externally: learn our application data schema, and develop a fluency in how to transform it to enhance customers' decision-making with data.
- Internally: guide product and development teams, advising on instrumentation and laying development foundations for product usage reporting.
- Fulfill projects with minimal guidance but with an appropriate sense of when and how to collaborate with others.
- Build scalable, highly performant infrastructure for delivering clear business insights from a variety of raw data sources.

Qualifications
- Bachelor's degree in Computer Science, or equivalent work experience
- 8+ years of experience building ETL (Extract Transform Load) or ELT (Extract Load Transform) pipelines from scratch (a small ELT sketch follows this posting)
- Strong command of relational databases (PostgreSQL preferred), data modeling and database design
- Strong command of Python and experience building production web applications using Python
- Experience with cloud-based services (AWS RDS preferred)
- Experience developing on (or administering) BI / data visualization platforms (e.g., Looker, Tableau, Power BI, Mode, Data Studio, Domo, QlikView, etc.)
- Basic understanding of data warehouses such as Amazon Redshift, Google BigQuery, Snowflake, etc.
- Demonstrated history of translating data into clear and actionable narratives and communicating opportunities and challenges relevant to stakeholders
- Flexibility and adaptability – you will be operating in a fast-paced startup environment

At Juniper Square, we believe building a diverse workforce and an inclusive culture makes us a better company. If you think this job sounds like a fit, we encourage you to apply even if you don't meet all the qualifications.
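To ground the ELT emphasis above, here is a hedged Python sketch of a tiny extract-load-transform step against PostgreSQL (the engine the posting prefers); the staging table, CSV file, and DATABASE_URL environment variable are hypothetical stand-ins.

```python
# A hedged sketch of a small ELT step: load raw rows into PostgreSQL, then
# transform in-database with SQL. Table and column names are hypothetical;
# credentials come from the environment.
import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])

with conn, conn.cursor() as cur:
    # Load: copy raw events into a staging table as-is.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS stg_fund_flows (
            fund_id TEXT, investor_id TEXT, amount NUMERIC, flow_date DATE)
    """)
    with open("fund_flows.csv") as f:
        next(f)  # skip the header row
        cur.copy_expert("COPY stg_fund_flows FROM STDIN WITH (FORMAT csv)", f)

    # Transform: aggregate inside the database, not in application code.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS fund_flow_daily AS
        SELECT fund_id, flow_date, SUM(amount) AS net_flow
        FROM stg_fund_flows
        GROUP BY fund_id, flow_date
    """)
conn.close()
```

The ELT (rather than ETL) shape is the design choice: raw data lands first and transformations run as SQL inside the warehouse, which keeps them reviewable and re-runnable.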
Posted 1 week ago
9.0 years
0 Lacs
Andhra Pradesh, India
On-site
Data Engineer

Must have 9+ years of experience in the below-mentioned skills.

Must Have: Big Data concepts, Python (core Python; able to write code), SQL, Shell Scripting, AWS S3
Good to Have: Event-driven/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems - the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What You'll Do
- Plan, deploy and manage the testing effort for any given engagement / release; handle multiple projects and teams;
- Work closely with developers, business analysts and project managers to define the appropriate testing approach;
- Define key test processes, best practices, KPIs, collateral;
- Co-create the test strategies for automation projects for the chosen area of expertise; implement automation frameworks on projects;
- Train yourself in at least one area of expertise, such as (but not limited to) Selenium, RPA, AI/ML, Validated Testing;
- Oversee all aspects of quality assurance including establishing metrics, applying industry best practices, and developing new tools and processes to ensure quality goals are met;
- Define the scope of testing within the context of each release / delivery;
- Deploy and manage the appropriate testing framework to meet the testing mandate;
- Travel to the US (as needed) to work with clients and provide technology expertise to other ZS teams;
- Coach, mentor, and conduct training programs in data modeling, data warehousing, and other IT topics;
- Play an active role in developing and growing the practice;
- Create test deliverables required by company and project testing standards;
- Identify and lead process improvement and product improvement efforts;
- Provide domain expertise to the test design and learn new domains for future assignments;
- Manage the respective cohort and own expertise build-up for the cohort;
- Manage testing on validated systems processes (as applicable);
- Support business development in clusters / client spaces (as applicable).

What You'll Bring
- Bachelor's/master's degree in Computer Science, Electrical Engineering or other computer-related disciplines with minimum 60% marks throughout the education;
- 8-10 years of software testing experience in the business intelligence domain;
- Experience with end-to-end testing of the ETL process of a data warehouse system;
- Experience in one or more big data technologies: Spark, Databricks, Redshift, Azure, Informatica Cloud, GCP, Athena, etc.;
- Experience with DevOps/CI/CD/Jenkins will be an added advantage;
- Hands-on experience with reporting and web-based application testing will be an added advantage;
- Hands-on experience in one or more automation technologies (Selenium + Java/Python, Selenium IDE, testproject.IO, Python shell scripting, VBA) will be an added advantage;
- Experience in developing test plans, as well as translating them into test cases and executing them;
- In-depth understanding of defect management processes;
- SQL skills for development of expected results and troubleshooting problems found during testing (a reconciliation sketch follows this posting);
- Identify test data requirements and generate required data to support testing;
- Evaluate and analyze application behavior and data for potential software issues;
- Experience with validated systems is good to have.

Additional Skills:
- Must be technical, creative, detail-oriented and a strong team player;
- Knowledge of current data modeling and data warehouse concepts, issues, practices, methodologies, and trends;
- Significant experience with analyzing and troubleshooting the interaction between databases, operating systems, and applications;
- Ability to communicate clearly and effectively in oral and written form;
- Help with the selling process, including writing proposals and giving product demos.

Perks & Benefits:
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel:
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all.
We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application:
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com
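To illustrate the SQL-driven expected-results work the posting describes, here is a minimal, runnable sketch of source-to-target reconciliation; sqlite3 stands in for the real source and warehouse databases, and both the tables and the "ETL" step are contrived for the example.

```python
# A minimal sketch of source-to-target reconciliation checks used when
# testing an ETL load. sqlite3 stands in for the real databases; the
# "ETL" here is just a copy, purely for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.5), (3, 30.0);

    -- Pretend the ETL loaded this target table from the source.
    CREATE TABLE dw_orders (order_id INTEGER, amount REAL);
    INSERT INTO dw_orders SELECT * FROM src_orders;
""")

def scalar(sql):
    return db.execute(sql).fetchone()[0]

# Row-count reconciliation: no rows dropped or duplicated in flight.
assert scalar("SELECT COUNT(*) FROM src_orders") == \
       scalar("SELECT COUNT(*) FROM dw_orders"), "row counts diverge"

# Measure reconciliation: totals must match to the cent.
assert abs(scalar("SELECT SUM(amount) FROM src_orders")
           - scalar("SELECT SUM(amount) FROM dw_orders")) < 0.01, \
       "amount totals diverge"

print("source-to-target reconciliation passed")
```

In practice the same count/sum/checksum queries run against the real source system and warehouse, and get wrapped in a test framework (e.g., pytest) so each reconciliation is a named, repeatable test case.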
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems - the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What You'll Do
- Plan, deploy and manage the testing effort for any given engagement / release; handle multiple projects and teams;
- Work closely with developers, business analysts and project managers to define the appropriate testing approach;
- Define key test processes, best practices, KPIs, collateral;
- Co-create the test strategies for automation projects for the chosen area of expertise; implement automation frameworks on projects;
- Train yourself in at least one area of expertise, such as (but not limited to) Selenium, RPA, AI/ML, Validated Testing;
- Oversee all aspects of quality assurance including establishing metrics, applying industry best practices, and developing new tools and processes to ensure quality goals are met;
- Define the scope of testing within the context of each release / delivery;
- Deploy and manage the appropriate testing framework to meet the testing mandate;
- Travel to the US (as needed) to work with clients and provide technology expertise to other ZS teams;
- Coach, mentor, and conduct training programs in data modeling, data warehousing, and other IT topics;
- Play an active role in developing and growing the practice;
- Create test deliverables required by company and project testing standards;
- Identify and lead process improvement and product improvement efforts;
- Provide domain expertise to the test design and learn new domains for future assignments;
- Manage the respective cohort and own expertise build-up for the cohort;
- Manage testing on validated systems processes (as applicable);
- Support business development in clusters / client spaces (as applicable).

What You'll Bring
- Bachelor's/master's degree in Computer Science, Electrical Engineering or other computer-related disciplines with minimum 60% marks throughout the education;
- 7-9 years of software testing experience in the business intelligence domain;
- Experience with end-to-end testing of the ETL process of a data warehouse system;
- Experience in one or more big data technologies: Spark, Databricks, Redshift, Azure, Informatica Cloud, GCP, Athena, etc.;
- Experience with DevOps/CI/CD/Jenkins will be an added advantage;
- Hands-on experience with reporting and web-based application testing will be an added advantage;
- Hands-on experience in one or more automation technologies (Selenium + Java/Python, Selenium IDE, testproject.IO, Python shell scripting, VBA) will be an added advantage;
- Experience in developing test plans, as well as translating them into test cases and executing them;
- In-depth understanding of defect management processes;
- SQL skills for development of expected results and troubleshooting problems found during testing;
- Identify test data requirements and generate required data to support testing;
- Evaluate and analyze application behavior and data for potential software issues;
- Experience with validated systems is good to have.

Additional Skills:
- Must be technical, creative, detail-oriented and a strong team player;
- Knowledge of current data modeling and data warehouse concepts, issues, practices, methodologies, and trends;
- Significant experience with analyzing and troubleshooting the interaction between databases, operating systems, and applications;
- Ability to communicate clearly and effectively in oral and written form;
- Help with the selling process, including writing proposals and giving product demos.

Perks & Benefits:
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel:
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all.
We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application:
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role: AWS Data Engineer
Required Technical Skill Set: AWS Data Engineer
Desired Experience Range: 3-5 yrs
Location of Requirement: Kolkata
Notice period: Immediately

We are currently planning to do a Virtual Interview on 05 – July – 2025 (Saturday).
Interview Date: 05 – July – 2025 (Saturday)

Job Description:

Must-Have:
- 3 – 5 years of experience in Data Engineering, with at least 2+ years working on AWS.
- Strong foundation in building scalable data pipelines on AWS, excellent SQL skills, and hands-on experience with modern data processing frameworks.
- Proficient in AWS services: S3, Glue, Lambda, Redshift, Athena, IAM, CloudWatch, Step Functions (an S3-to-Glue orchestration sketch follows this posting).
- Strong programming/scripting skills in Python or Scala.
- Solid understanding of SQL and experience in data modeling and performance tuning.
- Experience in processing large-scale datasets (batch and streaming).
- Familiarity with data versioning, logging, and orchestration tools (e.g., Airflow, AWS Step Functions).

Good-to-Have:
- Experience with Snowflake, Databricks, or Apache Spark.
- Familiarity with Terraform or CloudFormation for infrastructure as code.
- Experience with CI/CD pipelines and version control (Git).
- Knowledge of data governance frameworks (e.g., Apache Atlas, AWS Lake Formation).
- Exposure to Kafka, Kinesis, or other streaming platforms.
- Knowledge of containerization using Docker and Kubernetes (EKS preferred).
- Understanding of data privacy regulations like GDPR, HIPAA, etc.
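To show how the S3, Lambda, and Glue services above commonly fit together, here is a hedged Python sketch of a Lambda handler that starts a Glue job when an object lands in S3; the Glue job name and its argument are hypothetical.

```python
# A hedged sketch of a common S3 -> Lambda -> Glue orchestration pattern.
# The Glue job name and bucket layout are hypothetical; the Lambda is
# assumed to be subscribed to S3 put-event notifications.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put-event notification: pick out the object that just landed.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Kick off the Glue ETL job, passing the new object as a job argument.
    run = glue.start_job_run(
        JobName="raw-to-curated",  # hypothetical Glue job
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"glue_job_run_id": run["JobRunId"]}
```

For multi-step flows the same trigger would start a Step Functions state machine instead, which then sequences Glue jobs, quality checks, and Redshift loads with retries built in.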
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Inspire Brands:
Inspire Brands is disrupting the restaurant industry through digital transformation and operational efficiencies. The company's technology hub, Inspire Brands Hyderabad Support Center, India, will lead technology innovation and product development for the organization and its portfolio of distinct brands. The Inspire Brands Hyderabad Support Center will focus on developing new capabilities in data science, data analytics, eCommerce, automation, cloud computing, and information security to accelerate the company's business strategy. Inspire Brands Hyderabad Support Center will also host an innovation lab and collaborate with start-ups to develop solutions for productivity optimization, workforce management, loyalty management, payments systems, and more.

We are looking for a Lead Data Engineer for the Enterprise Data Organization to design, build and manage data pipelines (data ingestion, data transformation, data distribution, quality rules, data storage, etc.) for an Azure cloud-based data platform. The candidate must possess strong technical, analytical, programming and critical thinking skills.

Duties and Responsibilities:
- Work closely with Product owners/managers to understand the business needs
- Collaborate with architects, data modelers and other team members to design technical details of data engineering jobs/processes to fulfill business needs
- Lead sessions with data engineers and provide necessary guidance to solve technical challenges
- Develop new and/or enhance existing data processing components (data ingest, data transformation, data store, data management, data quality)
- Actively contribute towards setting up and refining data engineering best practices
- Help data engineers adhere to coding standards, best practices, etc. and produce production-deployable code that is robust, scalable and reusable
- Support and troubleshoot the data environments, tune performance, etc.
- Document technical artifacts for developed solutions
- Good interpersonal skills; comfort and competence in dealing with different teams within the organization. Requires an ability to interface with multiple constituent groups and build sustainable relationships.
- Versatile, creative temperament; ability to think out-of-the-box while defining sound and practical solutions. Ability to master new skills.
- Familiar with Agile practices and methodologies

Education Requirements:
- Minimum: 4-year / Bachelor's degree in computer science, data science, information science or a related field, or equivalent work experience

Years of Experience:
- 8-10 years of experience in a Data Engineering role

Knowledge, Skills, and Abilities:
- Advanced SQL queries, scripts, stored procedures, materialized views, and views (5+ yrs experience)
- Focus on ELT to load data into the database and perform transformations in the database (5+ yrs experience); ability to use analytical SQL functions (a short illustration follows this posting). Snowflake experience a plus.
- Experience building dimensional data marts, data lakes and warehouses (5+ yrs experience)
- Cloud data warehouse solutions experience (Snowflake, Azure DW, or Redshift); data modeling, analysis, programming (3+ years' experience with one or more cloud platforms)
- Experience with DevOps models utilizing a CI/CD tool (2+ yrs experience)
- Hands-on work in an Azure cloud environment (ADLS, Blob)
- Talend, Apache Airflow or TWS, Azure Data Factory, and BI tools like Tableau preferred (3+ years' experience with one or more)
- Analyze data models

Equal Employment Opportunity Policy / EEO-1 Statement:
It is the policy of Inspire Brands Inc.™ ("IRB" or the "Company") to treat all employees and applicants for employment fairly and to provide equal employment opportunities without regard to race, color, sex, religion, national origin or ancestry, ethnicity, sexual orientation, gender identity, age, disability, genetic information, citizenship, military service or veteran status, marital status or any other characteristic protected under applicable federal, state, or local law. This policy applies to all employment practices including recruiting, hiring, placement, pay, promotions, transfers, training, leaves of absence, and termination. Inspire Brands, Inc. expressly prohibits any form of unlawful employment harassment based on race, color, sex, religion, national origin or ancestry, ethnicity, sexual orientation, gender identity, age, disability, genetic information, citizenship, military service or veteran status, marital status or any other characteristic protected under applicable federal, state, or local law. Improper interference with the ability of IRB's employees to perform their expected job duties will not be tolerated.
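As a small, runnable illustration of the "analytical SQL functions" called for above, the sketch below ranks stores by daily sales with a window function; sqlite3 stands in for Snowflake/Azure DW/Redshift, and the data is made up.

```python
# A small illustration of analytical (window) SQL functions: rank stores by
# daily sales and compute per-store running totals. sqlite3 is a stand-in
# for a cloud warehouse; the table and values are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE daily_sales (store TEXT, day TEXT, amount REAL);
    INSERT INTO daily_sales VALUES
        ('A', '2025-07-01', 120.0), ('B', '2025-07-01', 250.0),
        ('A', '2025-07-02', 300.0), ('B', '2025-07-02', 180.0);
""")

rows = db.execute("""
    SELECT store, day, amount,
           RANK() OVER (PARTITION BY day ORDER BY amount DESC) AS day_rank,
           SUM(amount) OVER (PARTITION BY store) AS store_total
    FROM daily_sales
    ORDER BY day, day_rank
""").fetchall()

for store, day, amount, day_rank, store_total in rows:
    print(store, day, amount, day_rank, store_total)
```

The same OVER (PARTITION BY ...) syntax works in Snowflake, Azure Synapse, and Redshift, which is why window functions are the workhorse of in-database (ELT-style) transformations.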
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Data Engineer
Primary Skills: Python and PySpark (mandatory), AWS services and pipelines
Location: Hyderabad / Pune / Coimbatore
Experience: 2 - 4 years of experience

Job Summary:
We are looking for a Lead Data Engineer who will be responsible for building AWS data pipelines as per requirements. Should have strong analytical skills, design capabilities, and problem-solving skills. Based on stakeholders' requirements, should be able to propose solutions to the customer for review, and discuss the pros/cons of different solution designs and optimization strategies.

Responsibilities:
- Provide technical and development support to clients to build and maintain data pipelines.
- Develop data mapping documents listing business and transformational rules.
- Develop, unit test, deploy and maintain data pipelines.
- Design a storage layer for storing tabular/semi-structured/unstructured data.
- Design pipelines for batch/real-time processing of large data volumes.
- Analyze source specifications and build data mapping documents.
- Identify and document applicable non-functional code sets and reference data across insurance domains.
- Understand profiling results and validate data quality rules.
- Utilize data analysis tools to construct and manipulate datasets to support analyses.
- Collaborate with and support Quality Assurance (QA) in building functional scenarios and validating results.

Requirements:
- 2+ years' experience developing and maintaining modern ingestion pipelines using technologies like AWS pipelines, Lambda, Spark, Apache NiFi, etc.
- Basic understanding of the MLOps lifecycle (data prep -> model training -> model deployment -> model inference -> model re-training).
- Should be able to design data pipelines for batch/real-time processing using Lambda, Step Functions, API Gateway, SNS, S3.
- Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift and Jupyter Notebooks.
- Requirements gathering: active involvement during requirements discussions with project sponsors, defining the project scope and delivery timelines, design and development.
- Strong in Spark Scala and Python pipelines (ETL and streaming); a PySpark batch sketch follows this posting.
- Strong experience in metadata management tools like AWS Glue.
- Strong experience in coding with languages like Java and Python.
- Good to have: AWS Developer certification.
- Good to have: Postman API and Apache Airflow or similar scheduler experience.
- Working with cross-functional teams to meet strategic goals.
- Experience in high-volume data environments.
- Critical thinking and excellent verbal and written communication skills.
- Strong problem-solving and analytical abilities; should be able to work and deliver individually.
- Good knowledge of data warehousing concepts.

Desired Skill Set:
Lambda, Step Functions, API Gateway, SNS, S3 (unstructured data), DynamoDB (semi-structured data), Aurora PostgreSQL (tabular data), AWS SageMaker, AWS CodeCommit/GitLab, AWS CodeBuild, AWS CodePipeline, AWS ECR.

About the Company:
ValueMomentum is amongst the fastest-growing insurance-focused IT services providers in North America. Leading insurers trust ValueMomentum with their core, digital and data transformation initiatives. Having grown consistently every year by 24%, we have now grown to over 4000 employees. ValueMomentum is committed to integrity and to ensuring that each team and employee is successful. We foster an open work culture where employees' opinions are valued. We believe in teamwork and cultivate a sense of fun, fellowship, and pride among our employees.
Benefits:
We at ValueMomentum offer you the opportunity to grow by working alongside the experts. Some of the benefits you can avail of are:
- Competitive compensation package comparable to the best in the industry.
- Career Advancement: individual career development, coaching and mentoring programs for professional and leadership skill development, and comprehensive training and certification programs.
- Performance Management: goal setting, continuous feedback and year-end appraisal; rewards and recognition for extraordinary performers.
- Benefits: comprehensive health benefits, wellness and fitness programs; paid time off and holidays.
- Culture: a highly transparent organization with an open-door policy and a vibrant culture.
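As a small illustration of the Spark/PySpark batch work described above, here is a hedged sketch that reads raw JSON from S3 and writes partitioned Parquet; the bucket paths and columns are hypothetical, and it assumes a Spark runtime already configured with S3 access (e.g., on EMR or Glue).

```python
# A hedged sketch of a PySpark batch step: read raw JSON, apply simple
# transformation rules, write partitioned Parquet. Paths and column names
# are hypothetical; S3 connectivity is assumed to be configured.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_batch").getOrCreate()

raw = spark.read.json("s3://raw-bucket/claims/2025/07/")  # hypothetical path

curated = (
    raw
    .filter(F.col("status").isNotNull())              # drop incomplete rows
    .withColumn("claim_date", F.to_date("claim_ts"))  # derive partition key
    .withColumn("amount", F.col("amount").cast("double"))
)

(curated
 .write
 .mode("overwrite")
 .partitionBy("claim_date")
 .parquet("s3://curated-bucket/claims/"))
```

Partitioning by date is the usual choice here because downstream Athena or Redshift Spectrum queries can then prune whole days instead of scanning the full dataset.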
Posted 1 week ago
8.0 years
0 Lacs
India
Remote
🔍 We're Hiring: Data Engineer | Remote (India)
📅 Duration: 3 to 6 Months
🏠 Work From Home | Immediate Joiners Preferred

We're currently looking for experienced Data Engineers to join an exciting project on a contract basis. If you're passionate about building robust data pipelines and working across diverse technologies, we want to hear from you!

Key Responsibilities
- Design and develop ETL/ELT pipelines to ingest, transform, and load data from various sources into our data platform.
- Integrate data from MySQL, IBM DB2, Microsoft SQL Server, APIs, files (CSV/JSON/XML), cloud services, and third-party platforms.
- Build and optimize data models (star/snowflake schemas) to support business intelligence and analytics.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data needs.
- Ensure data quality, consistency, and governance through validation, monitoring, and automated checks.
- Implement and maintain data orchestration workflows (e.g., Airflow, Azure Data Factory, or equivalent).
- Participate in the modernization of legacy pipelines (e.g., SSIS, Python scripts) to modern frameworks and cloud platforms.
- Ensure adherence to data security, privacy, and compliance policies.

Required Qualifications
- 4–8 years of hands-on experience as a Data Engineer
- Strong proficiency in SQL and experience with data modeling techniques
- Experience with ETL/ELT tools (e.g., SSIS, Apache Airflow, DBT, Azure Data Factory, or similar)
- Proficiency in at least one programming language: Python or Java
- Experience with cloud data platforms (e.g., Azure, AWS, GCP)
- Familiarity with data warehouse technologies (e.g., Snowflake, Redshift, Synapse, BigQuery)
- Knowledge of version control systems (e.g., Git) and basic CI/CD practices

Interested? Click Easy Apply to submit your application via LinkedIn. We're reviewing profiles on a rolling basis — apply now to be considered early!

#DataEngineer #ETL #HiringNow #RemoteJobs #ContractRole #SQL #Azure #Airflow #Snowflake #IndiaJobs
Posted 1 week ago
8.0 - 12.0 years
8 - 12 Lacs
Pune, Bengaluru
Hybrid
Role & responsibilities
- Overall 8+ years of prior experience as a Data Engineer / Data Analyst / BI Engineer.
- At least 5 years of consulting or client service delivery experience on Amazon Web Services (AWS).
- At least 5 years of experience in developing data ingestion, data processing and analytical pipelines for big data, relational databases, NoSQL and data warehouse solutions.
- Minimum of 5 years of hands-on experience in AWS and big data technologies such as Python, SQL, EC2, S3, Lambda, Spark/Spark SQL, Redshift, Snowflake, SnapLogic.
- Prior experience with SnapLogic, AWS Glue, and Lambda is a must-have.
- 3-5+ years of hands-on experience in programming languages such as Python, PySpark, Spark, SQL.
- 2+ years of experience with DevOps tools such as GitLab, Jenkins, CodeBuild, CodePipeline, CodeDeploy, etc.
- Bachelor's or higher degree in Computer Science or a related discipline.
- AWS certification such as Solutions Architect Associate, AWS Developer Associate, or AWS Big Data Specialty (nice to have).
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Spendflo
Procurement today is broken: slow, siloed, and outdated. At Spendflo, we're on a mission to redefine modern procurement. We're building the go-to platform for high-growth companies to manage procurement, renewals, spend visibility, and cost optimisation, all in one place. Spendflo is an ownership-driven company where experimentation is encouraged, learning is continuous, and every voice helps shape the product.
Role Overview
We are seeking a Product Analyst to partner with Product Managers to uncover insights and ship data-driven features. You will own the full data lifecycle, from extracting raw events in MongoDB to shaping dashboards and assisting product managers in building AI agents.
What You'll Do
- Perform data-led product and customer research using AI-assisted analytics
- Collaborate with Product Managers to define success metrics, design experiments, and validate hypotheses
- Design, build, and maintain ELT pipelines that move data from MongoDB and SaaS APIs into our cloud data warehouse (Redshift)
- Write efficient MongoDB aggregation queries and optimise indexes to ensure performance
- Produce clear, compelling dashboards in Looker, Tableau, or similar BI tools; enable self-serve insights for stakeholders
- Work closely with Engineering, Design, and Product teams to surface insights and influence roadmaps
What We're Looking For
- 1–3 years in a product analytics, data engineering, or similar role
- Demonstrated experience working alongside Product Managers in an agile environment
- Strong SQL skills and hands-on proficiency with MongoDB (including the aggregation framework)
- Practical experience building data pipelines and using cloud warehouses like Redshift
- Excellent communication skills with the ability to translate data into stories
Nice-to-Have
- Familiarity with dimensional modelling and cloud warehouses (Snowflake, BigQuery, Redshift, or similar)
- Comfort applying AI/ML techniques and LLM frameworks (e.g., LangChain, OpenAI API) to solve business problems
- Previous experience in early-stage startups (Series A/B) where rapid iteration and ownership were part of the day-to-day
Why Join Us
This is a unique opportunity to join a category-defining company at a pivotal stage. You'll get to assist in building impactful products, work alongside high-performing teams, and help shape the future of how businesses manage procurement. If you're excited by complex challenges and are ready to modernise the procurement industry, we'd love to talk to you.
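To make the aggregation-framework requirement concrete, the snippet below is an illustrative sketch only, assuming a hypothetical events collection with ts, feature, and user_id fields. It computes weekly active users per feature, the kind of query a Product Analyst here might write.

```python
from datetime import datetime, timedelta

from pymongo import MongoClient

# Hypothetical connection string, database, and collection names.
client = MongoClient("mongodb://localhost:27017")
events = client["product_analytics"]["events"]

start = datetime.utcnow() - timedelta(days=7)
end = datetime.utcnow()

# Weekly active users per feature: filter the date range, collect distinct
# user ids per feature with $addToSet, then project the set size.
pipeline = [
    {"$match": {"ts": {"$gte": start, "$lt": end}}},
    {"$group": {"_id": "$feature", "users": {"$addToSet": "$user_id"}}},
    {"$project": {"feature": "$_id", "_id": 0, "active_users": {"$size": "$users"}}},
    {"$sort": {"active_users": -1}},
]

for row in events.aggregate(pipeline):
    print(row["feature"], row["active_users"])
```

An index on the ts field keeps the $match stage from scanning the whole collection, which is the kind of index optimisation the posting alludes to.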
Posted 1 week ago
5.0 years
0 Lacs
Surat, Gujarat, India
On-site
Position: Technical Lead
Location: Surat, Gujarat (Onsite)
✅ Key Responsibilities
🚀 Architecture & System Design
· Define scalable, secure, and modular architectures.
· Implement high-availability patterns (circuit breakers, autoscaling, load balancing).
· Enforce OWASP best practices, role-based access, and GDPR/PIPL compliance.
💻 Full-Stack Development
· Oversee React Native & React.js codebases; mentor on state management (Redux/MobX).
· Architect backend services with Node.js/Express; manage real-time layers (WebSocket, Socket.io).
· Integrate third-party SDKs (streaming, ads, offerwalls, blockchain).
📈 DevOps & Reliability
· Own CI/CD pipelines and Infrastructure-as-Code (Terraform/Kubernetes).
· Drive observability (Grafana, Prometheus, ELK); implement SLOs and alerts.
· Conduct load testing, capacity planning, and performance optimization.
👥 Team Leadership & Delivery
· Mentor 5–10 engineers; lead sprint planning, code reviews, and Agile ceremonies.
· Collaborate with cross-functional teams to translate roadmaps into deliverables.
· Ensure on-time feature delivery and manage risk logs.
🔍 Innovation & Continuous Improvement
· Evaluate emerging tech (e.g., Layer-2 blockchain, edge computing).
· Improve development velocity through tooling (linters, static analysis) and process optimization.
📌 What You'll Need
· 5+ years in full-stack development, 2+ years in a lead role
· Proficient in: React.js, React Native, Node.js, Express, AWS, Kubernetes
· Strong grasp of database systems (PostgreSQL, Redis, MongoDB)
· Excellent communication and problem-solving skills
· Startup or gaming experience a bonus
🎯 Bonus Skills
· Blockchain (Solidity, smart contracts), streaming protocols (RTMP/HLS)
· Experience with analytics tools (Redshift, Metabase, Looker)
· Prior exposure to monetization SDKs (PubScale, AdX)
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
UWorld is the worldwide leader in online test prep for college entrance, undergraduate, graduate, and professional licensing exams throughout the United States. Since 2003, over 2 million students have trusted us to help them prepare for high-stakes examinations. We are looking for a talented lighting and shading artist who can work within a professional team to develop high-quality lighting scenes for medical and scientific animations.
Your Role:
- Develop light rigs for sequences and master 3D pre-comps
- Design and implement direct (key) lighting, reflected lighting, and shadows for complex shots that meet and enhance the art and tone direction
- Ensure that assigned shots fit the continuity of the sequence
- Produce photo-realistic shaders and materials, mainly for human anatomy, with details matching the references
- Be a keen observer of reality with a near-perfect understanding of relative scale and dimensions, especially in terms of human anatomy
- Maintain or exceed a consistent level of productivity while meeting deadlines and producing high-quality work
- Assist with project clean-up and archiving on an ongoing basis
- Participate in reviews as a team member to determine design solutions, providing feedback to other members of the production
- Work quickly and efficiently under tight deadlines
- Follow design guidelines, shot naming conventions, and other technical constraints
- Keep up to date with the latest developments in lighting tools and technologies
- Follow the creative head's direction and the production pipeline, proposing approaches that drive positive enhancements
- Solve complex technical problems that occur within the environment
- Apply effective time-management skills, including the ability to meet deadlines and follow instructions and established protocols/procedures
Your Experience:
- Degree in fine arts or equivalent with a thorough understanding of anatomy, or 7+ years of industry experience specializing in lighting
- Excellent knowledge of lighting and color theory
- Strong abilities in real-time rendering and post FX
- Ability to create modules and reusable lighting set-ups
- Ability to composite and render elements together to create a final frame
- Intermediate-level compositing experience (Nuke)
- Autodesk Maya skills with experience in Arnold and Redshift lighting and rendering
- Experience producing different render elements and passes
Soft Skills:
- Excellent interpersonal skills with a demonstrated ability to articulate ideas clearly, concisely, and persuasively, along with the ability to understand direction and accept feedback
- Strong leadership skills with specific experience managing product priorities, setting delivery expectations, and delivering enhancements on schedule
- Organization skills, including situational leadership, technical leadership, high attention to detail, and a proven ability to resolve field escalations with minimal impact to production schedules
- Ability to work effectively within a changing, high-growth environment
- Exceptional follow-through, personal drive, and desire to make a difference
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a seasoned Engineering Manager to lead the development of our end-to-end Video Telematics and Cloud Analytics Platform. This role demands a strong technical leader with experience across embedded systems, AI/ML, computer vision, cloud infrastructure, and data analytics. You will be responsible for driving technical execution across multidisciplinary teams, overseeing everything from edge-based video analytics to cloud-hosted dashboards and insights.
Key Responsibilities:
Platform Leadership
- Own and drive the development of the complete video telematics ecosystem, covering edge AI, video processing, the cloud platform, and data insights
- Architect and oversee scalable, secure cloud infrastructure for video streaming, data storage, OTA updates, analytics, and alerting systems
- Define the data pipeline architecture for collecting, storing, analyzing, and visualizing video, sensor, and telematics data
Cross-functional Engineering Management
- Lead and coordinate cross-functional teams: the cloud/backend team for infrastructure, APIs, analytics dashboards, and system scalability; the AI/ML and CV teams for DMS/ADAS model deployment and real-time inference; the hardware/embedded teams for firmware integration, camera tuning, and SoC optimization
- Collaborate with product and business stakeholders to define the roadmap, features, and timelines
Data Analytics
- Work closely with data scientists to build meaningful insights and reports from video and sensor data
- Drive implementation of real-time analytics, fleet safety scoring, and predictive insights using telematics data
Operational Responsibilities
- Own product quality and system reliability across all components
- Support product rollout, monitoring, and updates in production
- Manage resources, mentor engineers, and build a high-performing development team
- Ensure adherence to industry standards for security, privacy (GDPR), and compliance (e.g., GSR, AIS-140)
Requirements:
Must-Have:
- Bachelor's or Master's in Computer Science, Electrical Engineering, or a related field
- 7+ years of experience in software/product engineering, with 2+ years in a technical leadership or management role
- Deep understanding of cloud architecture, video pipelines, edge computing, and microservices
- Proficiency with AWS/GCP/Azure, Docker/Kubernetes, serverless computing, and RESTful API design
- Solid grasp of AI/ML integration and computer vision workflows (model lifecycle, optimization, deployment)
- Experience with data pipelines, SQL/NoSQL databases, and analytics tools (e.g., Redshift, Snowflake, Grafana, Superset)
Good-to-Have:
- Prior work on automotive, fleet, or video telematics solutions
- Experience with camera hardware (MIPI, ISP tuning), compression codecs (H.264/H.265), and event-based recording
- Familiarity with telematics protocols (CAN, MQTT) and geospatial analytics
- Working knowledge of data privacy regulations (GDPR, CCPA)
Posted 1 week ago
7.0 years
40 - 45 Lacs
Mumbai Metropolitan Region
On-site
This role is for one of Weekday's clients
Salary range: Rs 4000000 - Rs 4500000 (i.e., INR 40-45 LPA)
Min Experience: 7 years
Location: Mumbai, India
JobType: full-time
Requirements:
- Minimum 7 years of experience in data engineering
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Demonstrated experience leading data engineering teams with full technical ownership
- Strong command of PostgreSQL and advanced SQL techniques
- Proficiency in Python for data-related workflows
- Solid understanding of data architecture, ELT pipeline design, and data engineering methodologies
- Familiarity with agile development practices and team collaboration
- Preferred experience with: dbt (data build tool); AWS data pipeline services (e.g., Glue, Redshift, S3); data modeling and statistical analysis techniques
Key Responsibilities:
- Lead, mentor, and support a team of data engineers in delivering scalable and high-quality data solutions
- Oversee the planning and execution of data initiatives aligned with business and product strategies
- Architect, build, and maintain robust ELT pipelines to process and transform large datasets
- Ensure data accuracy, integrity, and availability across various platforms and systems
- Work closely with product managers, analysts, and other engineering teams to support data-driven decision making
- Design and manage scalable data models tailored for analytics and machine learning applications
- Conduct exploratory data analysis to uncover insights and inform modeling strategies
- Identify inefficiencies in data processes and lead automation or optimization efforts
- Stay informed about emerging technologies and best practices in data engineering and bring innovative solutions to the team
Skills: PostgreSQL, SQL, Python, ELT Pipelines, Data Architecture, dbt, AWS (Glue, Redshift, S3), Agile, Data Modeling, Team Leadership
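As an illustration of the advanced SQL this role expects, here is a minimal sketch of one common ELT transform: deduplicating a raw feed to the latest row per key with a window function. The DSN and the raw.orders/staging.orders_current names are hypothetical.

```python
import psycopg2

# Hypothetical DSN; in practice this comes from configuration or a secret store.
conn = psycopg2.connect("dbname=analytics user=etl host=localhost")

# Keep only the most recent version of each order, a typical staging step
# before dimensional modeling.
DEDUP_SQL = """
CREATE TABLE IF NOT EXISTS staging.orders_current AS
SELECT *
FROM (
    SELECT o.*,
           ROW_NUMBER() OVER (
               PARTITION BY order_id
               ORDER BY updated_at DESC
           ) AS rn
    FROM raw.orders o
) ranked
WHERE rn = 1;
"""

# The connection context manager commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.execute(DEDUP_SQL)
conn.close()
```

In a dbt-based stack, the same SELECT would live in a model file and dbt would own the table materialisation instead of the explicit CREATE TABLE.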
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
We are looking for a Data Engineering QA Engineer who will be responsible for testing, validating, and ensuring the quality of our data pipelines, data transformations, and analytics platforms. The role involves creating test strategies, designing test cases, and working closely with Data Engineers to ensure the accuracy, integrity, and performance of our data solutions.
Key Responsibilities:
- Data Pipeline Testing: Test and validate data pipelines (ETL/ELT processes) to ensure accurate data movement, transformation, and integration across different platforms
- Data Quality Assurance: Define and implement data quality checks, perform exploratory data testing, and monitor data for accuracy and consistency
- Test Automation: Design and implement automated testing strategies for data validation using frameworks/tools like PyTest, SQL queries, or custom scripts
- Collaboration: Work closely with Data Engineers, Data Analysts, and Product Managers to understand requirements and deliver test plans and strategies aligned with data engineering processes
- Performance Testing: Analyze and test the performance and scalability of large-scale data solutions to ensure they meet business requirements
- Defect Management: Identify, track, and resolve data quality issues and bugs, working with teams to ensure timely resolution
- Compliance: Ensure that data engineering solutions comply with data governance, privacy, and security standards
- Reporting: Generate testing reports and provide insights into data quality and system performance
Required Skills & Experience:
- Proven Experience: 3-5 years of experience as a QA Engineer, Data Engineer, or similar role in data-focused environments
- Strong SQL Skills: Proficiency in writing complex SQL queries to validate and test data
- ETL/ELT Experience: Familiarity with ETL/ELT tools and processes like dbt, Apache Airflow, Talend, Informatica, etc.
- Automation Frameworks: Experience with test automation frameworks and tools such as PyTest, Robot Framework, or similar
- Cloud Platforms: Knowledge of cloud services (AWS, GCP, Azure) and tools like Redshift, BigQuery, Snowflake, or Databricks
- Programming: Strong scripting and programming skills in Python, Java, or a similar language
- Data Warehousing: Understanding of data warehousing concepts and best practices for data validation
- Version Control: Experience using version control tools (e.g., Git) for code and testing artifacts
- Agile Environment: Experience working in Agile/Scrum teams and knowledge of CI/CD pipelines
- Attention to Detail: Meticulous when it comes to data validation, ensuring data accuracy and quality at every step
Nice to Have:
- Big Data Experience: Exposure to big data tools such as Hadoop, Spark, or Kafka
- Data Governance & Compliance: Familiarity with GDPR, CCPA, or other data privacy regulations
- BI Tools: Experience working with BI tools like Tableau, PowerBI, or Looker
- Certification: AWS/GCP Data Engineering or QA certifications
Education: Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field
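A minimal sketch of the PyTest-style data-validation checks this role describes. It uses an in-memory SQLite table as a stand-in for a warehouse connection so the example is self-contained; the table name and quality rules are hypothetical.

```python
import sqlite3

import pytest


@pytest.fixture()
def conn():
    # In a real suite this would connect to Redshift/Snowflake/BigQuery;
    # an in-memory SQLite database keeps the sketch runnable as-is.
    db = sqlite3.connect(":memory:")
    db.executescript(
        """
        CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, email TEXT);
        INSERT INTO dim_customer VALUES (1, 'a@example.com'), (2, 'b@example.com');
        """
    )
    yield db
    db.close()


def test_no_null_emails(conn):
    # Data-quality rule: every customer row must carry an email.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM dim_customer WHERE email IS NULL"
    ).fetchone()[0]
    assert nulls == 0


def test_customer_id_unique(conn):
    # Data-quality rule: the surrogate key must be unique.
    dupes = conn.execute(
        """
        SELECT COUNT(*) FROM (
            SELECT customer_id FROM dim_customer
            GROUP BY customer_id HAVING COUNT(*) > 1
        )
        """
    ).fetchone()[0]
    assert dupes == 0
```

Checks like these typically run in CI after each pipeline deployment, which is where the CI/CD requirement above comes in.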
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Minimum Requirements
- 5+ years of experience in Data Engineering, including hands-on expertise with Databricks, AWS EMR, Redshift, and various database management systems
- Advanced SQL RDBMS design and query-building skills (Oracle, SQL Server, Databricks, Redshift, etc.)
- Proficiency in programming languages like Python and PySpark
- Experience in data normalization, data modelling, decoupled ETL, and SQL modelling
- Experience profiling, manipulating, and merging massive data sets using big data technologies, preferably Databricks and AWS analytics services
- Exposure to Unix or other shell scripting, and job scheduling using Control-M or equivalent tools
- Attention to detail and logic-based thinking in building, testing, and reviewing data pipelines containing multi-disciplinary information
- Commitment to building with established coding and naming standards
- Experience guiding ad-hoc technical solutions for the team
- Preferred languages: Python, PySpark, SparkSQL, SQL, Java
- Exposure to SAP ERP systems, Salesforce.com, etc. is preferred
- Visualization tool experience, especially with Tableau or Power BI
- Experience working in larger organizations with established individual responsibilities and teams
We Value
- Databricks and AWS cloud certifications
- Experience in business domains: supply chain, pricing, sales & marketing
- Experience working with data scientists
- Strong problem solvers who can invent new techniques and approaches when necessary
- Ability to work in a fast-paced environment
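Since the posting leads with profiling massive data sets, here is a minimal PySpark sketch of a one-pass column profile (null counts and approximate distinct counts), the kind of check run before modeling or merging feeds. The input path is hypothetical; any DataFrame works the same way.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling-sketch").getOrCreate()

# Hypothetical input location.
df = spark.read.parquet("s3://example-bucket/customers/")

# One pass over the data: null count and approximate distinct count per
# column. approx_count_distinct avoids a full shuffle on massive tables.
profile = df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}__nulls") for c in df.columns]
    + [F.approx_count_distinct(c).alias(f"{c}__distinct") for c in df.columns]
)
profile.show(truncate=False)
print("rows:", df.count())
```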
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India
Experience: 7 to 13 Years Only
Job Location: PAN India
Required Technical Skill Set:
- Working on EMR, good knowledge of CDK, and setting up ETL and data pipelines
- Coding: Python
- AWS EMR, Athena, Glue, SageMaker, SageMaker Studio
- Data security & encryption
- ML/AI pipelines
- Redshift
- AWS Lambda
Nice to have skills & experience:
- Oracle/SQL database administration
- Data modelling
- RDS & DMS
- Serverless architecture
- DevOps
Additional requirements:
- 3+ years of industry experience in Data Engineering on the AWS cloud, with Glue, Redshift, and Athena experience
- Ability to write high-quality, maintainable, and robust code, often in SQL, Scala, and Python
- 3+ years of data warehouse experience with Oracle, Redshift, PostgreSQL, etc.; demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing
- Extensive experience working with cloud services (AWS, MS Azure, GCP, etc.) with a strong understanding of cloud databases (e.g., Redshift/Aurora/DynamoDB), compute engines (e.g., EMR/Glue), data streaming (e.g., Kinesis), storage (e.g., S3), etc.
- Experience/exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.)
Kind Regards,
Priyankha M
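For a taste of the Athena work listed above, a minimal boto3 sketch using the standard start/poll/fetch pattern. The region, analytics_db database, sales table, and results bucket are all hypothetical.

```python
import time

import boto3

# Hypothetical region and result-bucket names.
athena = boto3.client("athena", region_name="ap-south-1")

resp = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS revenue FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

# Fetch the first page of results; the first row is the column header.
if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```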
Posted 1 week ago