3.0 - 7.0 years
0 Lacs
karnataka
On-site
lululemon is an innovative performance apparel company that specializes in providing high-quality products for yoga, running, training, and other athletic activities. Our focus lies in developing technical fabrics and functional designs to create products and experiences that support individuals in their journey of movement, growth, connection, and overall well-being. At lululemon, we attribute our success to our dedication to innovation, our commitment to our people, and the meaningful relationships we establish within the communities we serve. We are dedicated to driving positive change and fostering a healthier and more prosperous future. A key aspect of our mission involves cultivating an environment that is equitable, inclusive, and growth-oriented for all our team members.

Our India Tech Hub is instrumental in enhancing our technological capabilities across domains such as Product Systems, Merchandising and Planning, Digital Presence, Distribution and Logistics, and Corporate Systems. The team in India collaborates closely with our global team on projects of strategic significance.

Joining the Content team at lululemon offers an exciting opportunity to contribute to a fast-paced environment that is constantly exploring new initiatives to support our rapidly expanding business. We embrace cutting-edge technology and are driven by a continuous pursuit of improvement. Innovation is at the core of our ethos, and we encourage each other to step out of our comfort zones and embrace new challenges. Professional and personal growth is paramount to us, and we believe in learning from failures to pave the way for a brighter future. We foster an environment where team members can freely share feedback and ideas, promoting ongoing organizational growth. Operating within an agile methodology, we collaborate with product teams across various functions and a core commerce platform team. At lululemon, we prioritize a culture of enjoyment and lightheartedness in our daily work, and we recognize that we are more powerful as a team than as individuals.

Responsibilities:
- Develop statistical/machine learning models and analyses for Merchandising and Planning business problems
- Play a key role in all stages of the data science project life cycle
- Collaborate with Product Management and business teams to gain business understanding and collect requirements
- Identify necessary data sources and automate the collection process
- Conduct pre-processing and exploratory data analysis
- Evaluate and interpret results, presenting findings to stakeholders and business leaders
- Collaborate with engineering and product development teams to deploy models into production systems when applicable

Requirements and Skills:
- Demonstrated experience in delivering technical solutions using time series/machine learning techniques
- Proficiency in applied statistics, including familiarity with statistical tests, distributions, etc.
- Strong expertise in applied machine learning, including time series, regression analysis, and supervised and unsupervised learning
- Proficient programming skills in Python and database query languages such as SQL; familiarity with Snowflake and Databricks is advantageous
- Experience with time series forecasting techniques such as ARIMA, Prophet, and DeepAR
- Familiarity with data visualization libraries such as Plotly and business intelligence tools such as Power BI and Tableau
- Excellent communication and presentation abilities
- Previous experience in the retail industry

Additional Responsibilities:
- Identify valuable data sources and automate collection processes
- Preprocess structured and unstructured data
- Analyze large datasets to identify trends and patterns
- Develop predictive models and machine-learning algorithms
- Employ ensemble modeling techniques
- Utilize data visualization methods to present information effectively
- Propose actionable solutions and strategies to address business challenges
- Collaborate with engineering and product development teams
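For orientation, a minimal sketch of the ARIMA-style time series forecasting this posting lists might look like the following. It assumes pandas and statsmodels; the CSV path and column names are illustrative placeholders, not taken from the posting.

```python
# Minimal illustrative sketch: fitting an ARIMA model to a weekly demand series.
# The file path and column names ("week", "units_sold") are hypothetical.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Load a univariate weekly sales series indexed by date.
sales = (
    pd.read_csv("weekly_sales.csv", parse_dates=["week"])
      .set_index("week")["units_sold"]
      .asfreq("W")          # enforce a regular weekly frequency
      .interpolate()        # fill occasional gaps before modelling
)

# Fit a simple ARIMA(1, 1, 1); in practice the order would be chosen via
# AIC comparison or auto-selection and validated on a holdout window.
fitted = ARIMA(sales, order=(1, 1, 1)).fit()

# Forecast the next 12 weeks and inspect the point estimates.
forecast = fitted.forecast(steps=12)
print(forecast.head())
```

In a merchandising setting, the same pattern would typically be run per SKU or per category, with Prophet or DeepAR swapped in where seasonality or covariates demand it.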
Posted 1 week ago
8.0 - 13.0 years
0 Lacs
Pune
Work from Office
Responsibilities: * Design, develop & maintain data pipelines using SQL, AWS & Snowflake. * Collaborate with cross-functional teams on data warehousing projects.
Posted 1 week ago
0.0 - 1.0 years
1 - 2 Lacs
Lucknow
Work from Office
Develop and maintain robust ETL (Extract, Transform, Load) pipelines Ensure data quality, integrity, and security across systems Integrate data from various sources including APIs, databases, and cloud platforms Familiarity with cloud platforms Required Candidate profile Proficiency in SQL and Python Knowledge of data modeling, warehousing, and pipeline orchestration tools Strong understanding of database systems (relational and NoSQL)
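As a rough illustration of the ETL pipeline skills described above, a minimal extract-transform-load sketch in Python might look like this. The API endpoint, column names, and table name are assumptions made for the example, not requirements from the posting.

```python
# Minimal ETL sketch: extract from a REST API, transform with pandas, load to SQLite.
import requests
import pandas as pd
from sqlalchemy import create_engine

def extract(url: str) -> pd.DataFrame:
    """Pull JSON records from an API endpoint into a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()           # fail fast on HTTP errors
    return pd.DataFrame(response.json())

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleaning: drop duplicates, normalise column names, parse dates."""
    df = df.drop_duplicates()
    df.columns = [c.strip().lower() for c in df.columns]
    if "created_at" in df.columns:
        df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
    return df

def load(df: pd.DataFrame, table: str, conn_str: str = "sqlite:///warehouse.db") -> None:
    """Append the cleaned frame to a target table."""
    engine = create_engine(conn_str)
    df.to_sql(table, engine, if_exists="append", index=False)

if __name__ == "__main__":
    raw = extract("https://api.example.com/orders")   # placeholder endpoint
    load(transform(raw), table="orders_staging")
```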
Posted 1 week ago
7.0 - 12.0 years
7 - 17 Lacs
Pune, Chennai
Hybrid
Immediate opening for a Data Engineer at the Pune location. EXP: 7+ yrs. CTC: ECTC: NP: immediate to 1 week (currently serving notice period). Location: Pune. JD: Data Engineer with SQL, Python, PySpark, Hive, Airflow, Snowflake. Interested candidates, please share your resume with srinivasan.jayaraman@servion.com.
Posted 1 week ago
8.0 - 12.0 years
10 - 20 Lacs
Gurugram
Work from Office
Job Summary: We are seeking a highly experienced and motivated Snowflake Data Architect & ETL Specialist to join our growing Data & Analytics team. The ideal candidate will be responsible for designing scalable Snowflake-based data architectures, developing robust ETL/ELT pipelines, and ensuring data quality, performance, and security across multiple data environments. You will work closely with business stakeholders, data engineers, and analysts to drive actionable insights and ensure data-driven decision-making.

Key Responsibilities:
- Design, develop, and implement scalable Snowflake-based data architectures.
- Build and maintain ETL/ELT pipelines using tools such as Informatica, Talend, Apache NiFi, Matillion, or custom Python/SQL scripts.
- Optimize Snowflake performance through clustering, partitioning, and caching strategies.
- Collaborate with cross-functional teams to gather data requirements and deliver business-ready solutions.
- Ensure data quality, governance, integrity, and security across all platforms.
- Migrate legacy data warehouses (e.g., Teradata, Oracle, SQL Server) to Snowflake.
- Automate data workflows and support CI/CD deployment practices.
- Implement data modeling techniques including dimensional modeling, star/snowflake schema, and normalization/denormalization.
- Support and promote metadata management and data governance best practices.

Technical Skills (Hard Skills):
- Expertise in Snowflake: architecture design, performance tuning, cost optimization.
- Strong proficiency in SQL, Python, and scripting for data engineering tasks.
- Hands-on experience with ETL tools: Informatica, Talend, Apache NiFi, Matillion, or similar.
- Proficient in data modeling (dimensional, relational, star/snowflake schema).
- Good knowledge of cloud platforms: AWS, Azure, or GCP.
- Familiar with orchestration and workflow tools such as Apache Airflow, dbt, or DataOps frameworks.
- Experience with CI/CD tools and version control systems (e.g., Git).
- Knowledge of BI tools such as Tableau, Power BI, or Looker.

Certifications (Preferred/Required):
- Snowflake SnowPro Core Certification: required or highly preferred
- SnowPro Advanced Architect Certification: preferred
- Cloud certifications (e.g., AWS Certified Data Analytics - Specialty, Azure Data Engineer Associate): preferred
- ETL tool certifications (e.g., Talend, Matillion): optional but a plus

Soft Skills:
- Strong analytical and problem-solving capabilities.
- Excellent communication and collaboration skills.
- Ability to translate technical concepts into business-friendly language.
- Proactive, detail-oriented, and highly organized.
- Capable of multitasking in a fast-paced, dynamic environment.
- Passionate about continuous learning and adopting new technologies.

Why Join Us?
- Work on cutting-edge data platforms and cloud technologies
- Collaborate with industry leaders in analytics and digital transformation
- Be part of a data-first organization focused on innovation and impact
- Enjoy a flexible, inclusive, and collaborative work culture
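To illustrate the clustering and ELT-style work described in this posting, here is a hedged sketch using the Snowflake Python connector. Connection parameters, database, schema, and table names are placeholders; a real setup would pull credentials from a secrets manager and apply role-based access controls.

```python
# Illustrative sketch (not a production setup): apply a clustering key and
# run a simple ELT step with the Snowflake Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="***",            # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Cluster a large fact table on the columns most queries filter by,
    # so Snowflake can prune micro-partitions effectively.
    cur.execute(
        "ALTER TABLE ANALYTICS.MARTS.FACT_SALES CLUSTER BY (SALE_DATE, REGION_ID)"
    )

    # ELT-style transformation: push the aggregation down into Snowflake itself.
    cur.execute("""
        INSERT INTO ANALYTICS.MARTS.DAILY_SALES
        SELECT SALE_DATE, REGION_ID, SUM(AMOUNT) AS TOTAL_AMOUNT
        FROM ANALYTICS.STAGING.RAW_SALES
        GROUP BY SALE_DATE, REGION_ID
    """)
finally:
    conn.close()
```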
Posted 1 week ago
2.0 - 7.0 years
5 - 8 Lacs
Noida
Work from Office
Develop and support ETL pipelines using Snowflake, ADF, Databricks, Python. Manage data quality, model design, Kafka/Airflow orchestration, and troubleshoot production issues in Agile teams. Required Candidate profile 1.5–3 yrs in ETL, Snowflake, ADF, Python, SQL. Knowledge of Databricks, Airflow, Kafka preferred. Bachelor's in CS/IT. Experience in data governance and cloud platforms is a plus.
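A minimal sketch of the Airflow orchestration this posting mentions, assuming Airflow 2.4 or later. The DAG id and task bodies are placeholders; a production pipeline would call Snowflake, ADF, or Databricks operators rather than plain Python callables.

```python
# Hedged sketch of an Airflow DAG ordering extract -> transform -> load.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")       # placeholder

def transform():
    print("clean and model data")   # placeholder

def load():
    print("load into Snowflake")    # placeholder

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",      # "schedule" keyword assumes Airflow 2.4+
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load   # define run order
```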
Posted 1 week ago
4.0 - 7.0 years
7 - 17 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Key Responsibilities:
- Design, develop, and maintain data transformation pipelines using dbt/IICS on Snowflake.
- Write optimized SQL and Python scripts for complex data modeling and processing tasks.
- Collaborate with data analysts, engineers, and business teams to implement scalable ELT workflows.
- Create and manage data models, schemas, and documentation in dbt.
- Optimize Snowflake performance using best practices (clustering, caching, virtual warehouses).
- Manage data integration from data lakes, external systems, and cloud sources.
- Ensure data quality, lineage, version control, and compliance across all environments.
- Participate in code reviews, testing, and deployment activities using CI/CD pipelines.

Required Skills:
- 5-8 years of experience in Data Engineering or Data Platform Development.
- Hands-on experience with Snowflake: data warehousing, architecture, and performance tuning.
- Proficiency in dbt (Data Build Tool): model creation, Jinja templates, macros, testing, and documentation.
- Hands-on experience creating mappings and workflows in IICS, with extensive experience in performance tuning and troubleshooting.
- Strong Python scripting for data transformation and automation.
- Advanced skills in SQL: writing, debugging, and tuning queries.
- Experience with data lake and data warehouse concepts and implementations.
- Familiarity with Git-based workflows and version control in dbt projects.

Preferred Skills (Good to Have):
- Experience with Airflow, Dagster, or other orchestration tools.
- Knowledge of cloud platforms such as AWS, Azure, or GCP.
- Exposure to BI tools such as Power BI, Tableau, or Looker.
- Understanding of data governance, security, and compliance.
- Experience leading a development team.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We are looking for a Data Scientist to join our dynamic team dedicated to developing cutting-edge AI-powered search and research tools that are revolutionizing how teams access information and make informed decisions. As a Data Scientist, you will play a crucial role in transforming complex datasets into valuable insights, making an impact at the forefront of productivity and intelligence tool development. Your responsibilities will include owning and managing the data transformation layer using dbt and SQL, designing scalable data models, maintaining business logic, creating intuitive dashboards and visualizations using modern BI tools, collaborating with various teams to uncover key insights, working with diverse structured and unstructured data sources such as Snowflake and MongoDB, and translating business questions into data-driven recommendations. Additionally, you will support experimentation and A/B testing strategies across teams. The ideal candidate for this role will have a minimum of 4-8 years of experience in analytics, data engineering, or BI roles, with strong proficiency in SQL, dbt, and Python (pandas, plotly, etc.). Experience with BI tools, dashboard creation, and working with multiple data sources is essential. Excellent communication skills are a must as you will collaborate across global teams. Familiarity with Snowflake, MongoDB, Airflow, startup experience, or a background in experimentation is considered a bonus. Joining our team means being part of a global effort to redefine enterprise search and research with a clear vision and strong leadership. If you are passionate about solving complex data challenges, enjoy working independently, and thrive in a collaborative environment with brilliant minds, this role offers an exciting opportunity for professional growth and innovation. Location: Abu Dhabi Experience: 4-8 years Role Type: Individual Contributor | Reports to Team Lead (Abu Dhabi),
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Database Administrator SR at Sagent, you will play a crucial role in operationalizing data to create an efficient environment that drives value from analytics. Your primary responsibilities will include managing backend assets, configuring and setting up cloud data assets and pipelines. As a DataOps Engineer, you will be expected to have extensive experience in handling various data assets such as Postgres, Snowflake, and GCP-based databases. Your expertise will be utilized in reducing development time, enhancing data quality, and providing guidance to data engineers. To qualify for this position, you should hold a Bachelor's Degree in Computer Science or possess equivalent work experience along with at least 5 years of experience in Data Ops. Hands-on experience in working with Postgres, Snowflake administration, Google Cloud Platform, and setting up CICD pipelines on Azure DevOps is essential. Proficiency in SQL, including performance tuning, and the ability to work collaboratively in a fast-paced environment on multiple projects concurrently are key requirements. As a DataOps Engineer at Sagent, you will be responsible for tasks such as building and optimizing data pipelines, automating processes to streamline data processing, managing the production of data pipelines, designing data engineering assets, and facilitating collaboration with other team members. Your role will also involve testing data pipelines at various stages, adopting new solutions, ensuring data security standards, and continuously improving data flow. Joining Sagent comes with a range of perks, including participation in benefit programs from Day #1, Remote/Hybrid workplace options, Group Medical Coverage, Group Personal Accidental, Group Term Life Insurance Benefits, Flexible Time Off, Food@Work, Career Pathing, Summer Fridays, and more. Sagent is at the forefront of transforming the mortgage servicing industry by providing a modern customer experience throughout the loan servicing process. By joining our team, you will be part of a dynamic environment that values innovation and aims to disrupt the lending and housing sector. If you are looking for a rewarding opportunity to contribute to a mission-driven company and be part of a team that is reshaping the future of lending and housing, Sagent is the place for you.,
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with the organization. If you aspire to be part of an inclusive, adaptable, and forward-thinking workplace, we encourage you to apply now.

We are currently looking for a Data Engineer with proficiency in SQL, Snowflake, AWS, Git, and Jenkins to join our team in Bangalore, Karnataka, India. As a Data Engineer, you will be responsible for deploying code using Git and Jenkins, working with large-scale data sets, and having exposure to relational and NoSQL databases and ETL tools. Knowledge of Snowflake, AWS, Python, data warehousing, and data modeling is essential for this role.

Key Skills:
- Proficiency in SQL, Snowflake, and AWS
- Experience with Git and Jenkins for code deployment
- Exposure to large-scale data sets
- Familiarity with relational and NoSQL databases and ETL tools
- Knowledge of Snowflake, AWS, Python, data warehousing, and data modeling

Good-to-Have Skills:
- Passion for data-driven enterprise business strategy
- Strong communication skills for both technical and business interactions
- Ability to build trust and collaborate with cross-functional teams
- Self-directed and capable of managing complex projects independently
- Understanding of continuous integration techniques
- Experience in Agile/Scrum methodologies
- Strong analytical, diagnostic, and problem-solving abilities
- Results-oriented with a focus on delivering business value
- Experience in the financial sector is a plus

Minimum Experience Required: 6-9 years

General Expectations:
1) Excellent communication skills
2) Willingness to work in a 10:30 AM to 8:30 PM shift
3) Flexibility to work at client locations in Bangalore
4) Ready to work in a hybrid office environment
5) Full return to the office expected by 2025

Pre-Requisites:
1) Genuine and digitally signed Form 16 for all employments
2) Employment history details present in UAN/PPF statements
3) Candidates must undergo video screening to verify their authenticity and work setup
4) Real work experience on mandatory skills is required
5) Notice period of 0 to 3 weeks
6) Screening for any gaps in education or employment

About NTT DATA: NTT DATA is a trusted global innovator in business and technology services, serving 75% of the Fortune Global 100. With a commitment to helping clients innovate and succeed in the long term, NTT DATA offers diverse expertise across more than 50 countries. As a Global Top Employer, we provide services in business and technology consulting, data and artificial intelligence, industry solutions, and application development. We are a leading provider of digital and AI infrastructure, investing in R&D to support organizations in their digital transformation journey. For more information, visit us at us.nttdata.com.
Posted 1 week ago
5.0 - 14.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Software Architect at Adobe, you will play a crucial role in defining and evolving the architectural vision and roadmap for our products. Your responsibilities will include ensuring alignment with business goals, providing proactive thought leadership, designing and overseeing the implementation of highly scalable distributed systems, driving the technical delivery of AI-powered features, exploring and implementing AI solutions, and fostering a culture of technical excellence and collaboration within the engineering organization. You will need a passion and love for what you do, along with 14+ years of experience in software development and 5+ years in a software architect role. Deep expertise in designing and implementing highly scalable architectures, proficiency in Java, Spring Boot, Rest Services, MySQL or Postgres, MongoDB, Kafka, and experience with cloud technologies such as AWS and/or Azure are essential. Additionally, a strong understanding of Artificial Intelligence (AI), particularly Generative AI (GenAI) and Agents, is required. The ideal candidate will be ambitious, thrive in a fast-paced environment, demonstrate a strong bias to action, and possess excellent interpersonal, analytical, problem-solving, and conflict resolution skills. Strong business acumen, self-motivation, and the ability to mentor a team towards high-quality deliverables are also key attributes. A Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field is necessary for this role. If you are looking to join a team of passionate engineers at Adobe, drive technical excellence, and contribute to shaping the technology stack of next-gen products and offerings, then this Software Architect position is perfect for you. Join us in our mission to transform how companies interact with customers across every screen and be part of a culture that values innovation, collaboration, and continuous learning.,
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
As a Tech Lead, Data Architecture at Fiserv, you will play a crucial role in our data warehousing strategy and implementation. Your responsibilities will include designing, developing, and leading the adoption of Snowflake-based solutions to ensure efficient and secure data systems that drive our business analytics and decision-making processes. Collaborating with cross-functional teams, you will define and implement best practices for data modeling, schema design, and query optimization in Snowflake. Additionally, you will develop and manage ETL/ELT workflows to ingest, transform, and load data from various resources into Snowflake, integrating data from diverse systems like databases, APIs, flat files, and cloud storage. Monitoring and tuning Snowflake performance, you will manage caching, clustering, and partitioning to enhance efficiency while analyzing and resolving query performance bottlenecks. You will work closely with data analysts, data engineers, and business users to understand reporting and analytic needs, ensuring seamless integration with BI Tools like Power BI. Your role will also involve collaborating with the DevOps team for automation, deployment, and monitoring, as well as planning and executing strategies for scaling Snowflake environments as data volume grows. Keeping up to date with emerging trends and technologies in data warehousing and data management is essential, along with providing technical support, troubleshooting, and guidance to users accessing the data warehouse. To be successful in this role, you must have 8 to 10 years of experience in data management tools like Snowflake, Streamsets, and Informatica. Experience with monitoring tools like Dynatrace and Splunk, Kubernetes cluster management, and Linux OS is required. Additionally, familiarity with containerization technologies, cloud services, CI/CD pipelines, and banking or financial services experience would be advantageous. Thank you for considering employment with Fiserv. To apply, please use your legal name, complete the step-by-step profile, and attach your resume. Fiserv is committed to diversity and inclusion and does not accept resume submissions from agencies outside of existing agreements. Beware of fraudulent job postings not affiliated with Fiserv to protect your personal information and financial security.,
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
You are a skilled Data Engineer with expertise in Data Modeling, SQL, Snowflake, Python, AWS, and NoSQL. Your primary responsibility will be designing and implementing scalable data solutions to ensure efficient data storage, retrieval, and processing. Experience in NoSQL Data Modeling would be an additional advantage. Your key responsibilities will include designing and implementing data models to support analytical and operational workloads, developing and managing SQL queries for data extraction, transformation, and loading (ETL), working extensively with Snowflake to build scalable data pipelines and warehouses, developing Python scripts for data processing and automation, implementing AWS services for cloud-based data solutions, working with NoSQL databases to handle semi-structured and unstructured data, ensuring data accuracy, consistency, and security across various storage systems, and collaborating with data scientists, analysts, and software engineers to deliver business insights. You must possess strong experience in Data Modeling for both Relational and NoSQL databases, proficiency in SQL with practical experience in database technologies, hands-on experience with Snowflake for data warehousing, strong programming skills in Python for data processing, expertise in AWS cloud services for data infrastructure, and experience working with NoSQL databases. It would be beneficial if you have experience in NoSQL Data Modeling best practices. Location: Bangalore Experience: 6-9 Years,
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Engineer for our data-rich e-commerce platform catering to the life sciences sector, your primary responsibility will be to support infrastructure, develop data pipelines, and deploy pricing logic. You will play a crucial role in ensuring the usability and interface design of internal tools that facilitate experimentation, pricing configuration, and real-time monitoring. Your key responsibilities will include: - Building and maintaining ETL pipelines for pricing, shipping, and behavioral datasets - Collaborating with data scientists and product managers to facilitate model development and experimentation - Developing APIs or backend logic to implement dynamic pricing algorithms - Creating internal dashboards or tools with a strong focus on usability and performance - Ensuring data quality, reliability, and documentation across all systems - Performing feature engineering to support predictive and optimization algorithms - Aggregating and transforming high-dimensional datasets at scale to enhance modeling efficiency and robustness - Optimizing algorithm performance for real-time and large-scale deployment To excel in this role, you must possess: - Flexibility to thrive in a dynamic, startup-like environment and tackle diverse tasks with innovative solutions - 3+ years of experience in data engineering or backend development - Proficiency in Databricks and distributed data processing frameworks - Strong skills in Python, SQL, and cloud-based platforms such as AWS, BigQuery, and Snowflake - Demonstrated expertise in designing user-friendly internal tools and interfaces - Familiarity with experimentation systems and monitoring infrastructure - Experience in efficiently handling large-scale, high-dimensional datasets - Preferred domain knowledge in e-commerce, with a strong advantage for familiarity with the pharmaceutical or scientific supply sector This is a contract role with the potential for conversion to full-time, starting from August to December 2025. The preferred location for this position is Bangalore, with alternatives in Mumbai and Kathmandu. If you are looking to contribute to a cutting-edge platform and drive impactful changes in the life sciences industry, we welcome your application.,
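As a sketch of the large-scale aggregation and feature engineering described above, a PySpark job might look roughly like the following. The storage paths and column names are assumptions for illustration, and the S3 paths presume the cluster is configured for cloud storage access.

```python
# Minimal PySpark sketch: per-SKU aggregation to support pricing models.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pricing_features").getOrCreate()

# Behavioral/order events, e.g. loaded from a lake table or parquet path.
orders = spark.read.parquet("s3://example-bucket/orders/")   # placeholder path

# Aggregate per SKU: average selling price, order count, last order date.
sku_features = (
    orders
    .groupBy("sku")
    .agg(
        F.avg("unit_price").alias("avg_price"),
        F.count("order_id").alias("order_count"),
        F.max("order_date").alias("last_order_date"),
    )
)

# Persist the feature table for downstream pricing and optimization algorithms.
sku_features.write.mode("overwrite").parquet("s3://example-bucket/features/sku/")
```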
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
DXFactor is a US-based tech company working with customers globally. We are a certified Great Place to Work and are currently seeking candidates for the role of Data Engineer with 4 to 6 years of experience. Our presence spans the US and India, specifically Ahmedabad. As a Data Engineer at DXFactor, you will be expected to specialize in Snowflake, AWS, and Python.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for both batch and streaming workflows.
- Implement robust ETL/ELT processes to extract data from diverse sources and load them into data warehouses.
- Build and optimize database schemas following best practices in normalization and indexing.
- Create and update documentation for data flows, pipelines, and processes.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Monitor and troubleshoot data pipelines to ensure optimal performance.
- Implement data quality checks and validation processes.
- Develop and manage CI/CD workflows for data engineering projects.
- Stay updated with emerging technologies and suggest enhancements to existing systems.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 4+ years of experience in data engineering roles.
- Proficiency in Python programming and SQL query writing.
- Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Familiarity with data warehousing technologies such as Snowflake, Redshift, and BigQuery.
- Demonstrated ability in constructing efficient and scalable data pipelines.
- Practical knowledge of batch and streaming data processing methods.
- Experience in implementing data validation, quality checks, and error handling mechanisms.
- Work experience with cloud platforms, particularly AWS (S3, EMR, Glue, Lambda, Redshift) and/or Azure (Data Factory, Databricks, HDInsight).
- Understanding of various data architectures including data lakes, data warehouses, and data mesh.
- Proven ability to debug complex data flows and optimize underperforming pipelines.
- Strong documentation skills and effective communication of technical concepts.
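A small example of the data validation and quality checks mentioned in the requirements, sketched in pandas before a batch is loaded into the warehouse. The rules and column names are illustrative assumptions, not requirements from the posting.

```python
# Lightweight data-quality checks run before loading a batch.
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality violations."""
    errors = []

    if df.empty:
        errors.append("batch is empty")

    # Required columns must exist.
    for col in ("order_id", "customer_id", "amount"):
        if col not in df.columns:
            errors.append(f"missing column: {col}")

    # Primary key must be unique and non-null.
    if "order_id" in df.columns:
        if df["order_id"].isna().any():
            errors.append("null order_id values")
        if df["order_id"].duplicated().any():
            errors.append("duplicate order_id values")

    # Simple range check on a numeric field.
    if "amount" in df.columns and (df["amount"] < 0).any():
        errors.append("negative amounts found")

    return errors

batch = pd.read_parquet("incoming_orders.parquet")   # placeholder source
problems = validate_batch(batch)
if problems:
    raise ValueError("data-quality checks failed: " + "; ".join(problems))
```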
Posted 1 week ago
9.0 - 13.0 years
0 Lacs
hyderabad, telangana
On-site
You have a great opportunity to join our team as a Data Architect with 9+ years of experience. In this role, you will be responsible for designing, implementing, and managing cloud-based solutions on AWS and Snowflake. Your main tasks will include working with stakeholders to gather requirements, designing solutions, developing and executing test plans, and overseeing the information architecture for the data warehouse. To excel in this role, you must have a strong skillset in Snowflake, DBT, and Data Architecture Design experience in Data Warehouse. Additionally, it would be beneficial to have Informatica or any ETL Knowledge or Hands-On Experience, as well as Databricks understanding. You should have 9 - 11 years of IT experience with 3+ years of Data Architecture experience in Data Warehouse and 4+ years in Snowflake. As a Data Architect, you will need to optimize Snowflake configurations and data pipelines to improve performance, scalability, and overall efficiency. You should have a deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modeling, Star & Snowflake schema design, Reference DW Architectures, ETL Architect, ETL (Extract, Transform, Load), Data Analysis, Data Conversion, Transformation, Database Design, Data Warehouse Optimization, Data Mart Development, and Enterprise Data Warehouse Maintenance and Support. In addition to your technical responsibilities, you will also be required to maintain detailed documentation for data solutions and processes, provide training and leadership to share expertise and best practices with the team, and collaborate with the data engineering team to ensure that data solutions are developed according to best practices. If you have 10+ years of overall experience in architecting and building large-scale, distributed big data products, expertise in designing and implementing highly scalable, highly available Cloud services and solutions, experience with AWS and Snowflake, as well as a strong understanding of data warehousing and data engineering principles, then this role is perfect for you. This is a full-time position based in Hyderabad, Telangana, with a Monday to Friday work schedule. Therefore, you must be able to reliably commute or plan to relocate before starting work. As part of the application process, we would like to know your notice period, years of experience in Snowflake, Data Architecture experience in Data Warehouse, current location, willingness to work from the office in Hyderabad, current CTC, and expected CTC. If you meet the requirements and are excited about this opportunity, we look forward to receiving your application. (Note: Experience: total work: 9 years is required for this position),
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Project Manager at Myridius, you will be responsible for leading projects from start to finish, ensuring successful delivery within budget and timeline constraints. You will define project scope, objectives, and milestones, effectively communicate them to stakeholders, and manage project resources including team members, budget, and technology stack. Proactively identifying and mitigating project risks will be a key part of your role, along with tracking project progress and providing regular status updates to stakeholders. In terms of team management, you will build and lead a high-performing team of data engineers and analysts. Creating a collaborative and productive work environment by promoting open communication and effective problem-solving will be essential. You will assign tasks based on team member strengths and workload capacity, provide regular feedback, coaching, and support for team members" growth. Your technical skills should include experience managing data warehousing projects on platforms like Snowflake and a basic understanding of cloud computing infrastructure and platforms such as AWS, Azure, or GCP. Collaboration with the data architect to ensure the data architecture can handle continuous data growth and complexity will also be part of your responsibilities. Maintaining clear and consistent communication with stakeholders, facilitating collaboration across cross-functional teams, and resolving data-related issues effectively are crucial aspects of your role. You will contribute to the Cloud Center of Excellence initiatives, share knowledge and best practices within the Cloud solutioning domain, and foster a culture of continuous learning and collaborative problem-solving within the team. Recruiting and onboarding new talent to strengthen the Cloud practice, implementing coaching and mentoring programs to upskill the team, and fostering cross-functional collaboration to achieve project goals and milestones will also be part of your responsibilities. Motivating and inspiring the project team, making clear and informed decisions under pressure, managing project costs effectively, and ensuring timely project completion within defined timelines and budget constraints are key components of your role. If you are passionate about driving innovation and excellence in data project management, and if you thrive in a dynamic and collaborative work environment, then this role at Myridius is the perfect opportunity for you to make a significant impact in the rapidly evolving landscapes of technology and business. Visit www.myridius.com to learn more and be a part of our transformative journey in helping businesses thrive in a world of continuous change.,
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior Analyst-Qlik Sense Developer at Alexion, you will have the opportunity to leverage your expertise in data analysis and data visualization tools to transform raw data into actionable insights. Your role will be crucial in designing, developing, and maintaining data reporting solutions and analytics platforms that drive informed decision-making and support strategic initiatives within the organization. Your primary accountabilities will include supporting the Alexion team with field force reporting by designing, developing, validating, and maintaining Qlik Sense dashboards for various business units and indications. You will work closely with stakeholders to understand business objectives, data sources, and key performance indicators in order to design effective solutions. Additionally, you will be responsible for designing and implementing data models in QlikSense, including data extraction, transformation, and loading processes using SQL scripting language. In this role, you will integrate data from multiple sources to ensure accuracy, consistency, and optimal performance of the analytics platforms. You will develop interactive dashboards, reports, and visualizations using Qlik Sense, identifying and addressing performance bottlenecks to optimize user experiences. Collaboration with cross-functional teams, conducting thorough testing of Qlik applications, and communicating project status and recommendations to stakeholders will be essential aspects of your responsibilities. To excel in this role, you should possess advanced understanding/experience with SQL, Snowflake, and Veeva CRM, along with expertise in Qlik scripting language, data modeling concepts, and data visualization tools. Desirable skills include a background in computer science or related field, 5-6 years of experience in reporting and visualization applications, and proficiency in web development technologies such as JavaScript and CSS. Strong analytical, problem-solving, communication, and interpersonal skills are also key requirements for this position. Join us at AstraZeneca's Alexion division where your work is not just a job, but a mission to make a real difference in the lives of patients worldwide. We offer a dynamic and inclusive environment where you can grow both personally and professionally, supported by exceptional leaders who value diversity and innovation. If you are ready to make an impact and contribute to our mission, apply now to join our team and be a part of our unique and ambitious world.,
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
At PwC, our managed services team focuses on providing outsourced solutions and support to clients across various functions. We help organizations streamline operations, reduce costs, and enhance efficiency by managing key processes and functions on their behalf. Our team is skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC are responsible for transitioning and running services, managing delivery teams, programs, commercials, performance, and delivery risk. Your role will involve continuous improvement and optimization of managed services processes, tools, and services. As an Associate at PwC, you will work as part of a team of problem solvers, assisting in solving complex business issues from strategy to execution. Professional skills and responsibilities at this level include using feedback and reflection to develop self-awareness, demonstrating critical thinking, and bringing order to unstructured problems. You will be involved in ticket quality review, status reporting for projects, adherence to SLAs, incident management, change management, and problem management. Additionally, you will seek opportunities for exposure to different situations, environments, and perspectives, uphold the firm's code of ethics, demonstrate leadership capabilities, and work in a team environment that includes client interactions and cross-team collaboration. Required Skills: - AWS Cloud Engineer - Minimum 2 years of hands-on experience in building advanced data warehousing solutions on leading cloud platforms - Minimum 1-3 years of Operate/Managed Services/Production Support Experience - Extensive experience in developing scalable, repeatable, and secure data structures and pipelines - Designing and implementing data pipelines for data ingestion, processing, and transformation in AWS - Building efficient ETL/ELT processes using industry-leading tools like AWS, PySpark, SQL, Python, etc. - Implementing data validation and cleansing procedures - Monitoring and troubleshooting data pipelines - Implementing and maintaining data security and privacy measures - Strong communication, problem-solving, quantitative, and analytical abilities Nice To Have: - AWS certification In our Managed Services platform, we deliver integrated services and solutions grounded in deep industry experience and powered by talent. Our team provides scalable solutions that add value to our clients" enterprise through technology and human-enabled experiences. We focus on empowering clients to navigate and capture the value of their Data & Analytics portfolio while cost-effectively operating and protecting their solutions. As a member of our Data, Analytics & Insights Managed Service team, you will work on critical offerings, help desk support, enhancement, optimization work, and strategic roadmap and advisory level work. Your contribution will be crucial in supporting customer engagements both technically and relationally.,
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
You should have a minimum of 3 years of experience in a similar role. You must be proficient in Java and Python programming languages. A strong understanding and working experience in Solidatus is required. Additionally, you should have a solid understanding of XML and JSON data formats. Knowledge of relational SQL and NoSQL databases such as Oracle, MSSQL, and Snowflake is essential. Preferred qualifications include exposure to NLP and LLM technologies and approaches, experience with machine learning and data mining techniques, familiarity with data security and privacy concerns, knowledge of data warehousing and business intelligence concepts, and an advanced degree in Computer Science, Engineering, or a related field. The ideal candidate will have a Bachelor's degree in Computer Science, Engineering, or a related field.,
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Engineer specializing in Databricks, your primary responsibility will be to develop, support, and drive end-to-end business intelligence solutions using Databricks. You will collaborate with business analysts and data architects to transform requirements into technical implementations. Your role will involve designing, developing, implementing, and maintaining PySpark code through the Databricks UI to facilitate data and analytics use cases for the client. Additionally, you will code, test, and document new or enhanced data systems to build robust and scalable applications for data analytics. You will also delve into performance, scalability, capacity, and reliability issues to identify and address any arising challenges. Furthermore, you will engage in research projects and proof of concepts to enhance data processing capabilities. Key Requirements: - 3+ years of hands-on experience with Databricks and PySpark. - Proficiency in SQL and adept data manipulation skills. - Sound understanding of data warehousing concepts and technologies. - Familiarity with Google Pub sub, Kafka, or Mongo DB is a plus. - Knowledge of ETL processes and tools for data extraction, transformation, and loading would be beneficial. - Experience with cloud platforms such as Databricks, Snowflake, or Google Cloud. - Understanding of data governance and data quality best practices. Qualifications: - Bachelor's degree in computer science, engineering, or a related field. - Continuous learning demonstrated through technical certifications or related methods. - 3+ years of relevant experience in Data Analytics, preferably within the Retail domain. Desired Qualities: - Self-motivated and dedicated to achieving outcomes for a rapidly growing team and organization. - Effective communication skills through verbal, written, and client presentations. Location: India Years of Experience: 3 to 5 years In this role, your expertise in Databricks and data engineering will play a crucial part in driving impactful business intelligence solutions and contributing to the growth and success of the organization.,
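As an illustration of the Databricks, PySpark, and Kafka skills this posting lists, a hedged structured-streaming sketch might look like the following. The broker address, topic, checkpoint path, and table name are placeholders; the Kafka source and Delta Lake are assumed to be available (as they are by default on Databricks).

```python
# Hedged sketch: stream Kafka events into a Delta table with PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_stream_sketch").getOrCreate()

# Read raw events from a Kafka topic (kafka source package assumed available).
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "orders")                      # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to string for downstream parsing.
parsed = stream.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

# Append the raw payloads into a bronze Delta table for later refinement.
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "dbfs:/mnt/checkpoints/orders")  # placeholder
    .outputMode("append")
    .toTable("bronze.orders_raw")   # placeholder table; requires Spark 3.1+
)

query.awaitTermination()
```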
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
As a skilled Data Engineer with 7-10 years of experience, you will be a valuable addition to our dynamic team in India. Your primary focus will involve designing and optimizing data pipelines to efficiently handle large datasets and extract valuable business insights. Your responsibilities will include designing, building, and maintaining scalable data pipelines and architecture. You will be expected to develop and enhance ETL processes for data ingestion and transformation, collaborating closely with data scientists and analysts to meet data requirements and deliver effective solutions. Monitoring data integrity through data quality checks and ensuring compliance with data governance and security policies will also be part of your role. Leveraging cloud-based data technologies and services for storage and processing will be crucial to your success in this position. To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proficiency in SQL and practical experience with databases such as MySQL, PostgreSQL, or Oracle is essential. Your expertise in programming languages like Python, Java, or Scala will be highly valuable, along with hands-on experience in big data technologies like Hadoop, Spark, or Kafka. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud is preferred. Understanding data warehousing concepts and tools such as Redshift and Snowflake, coupled with experience in data modeling and architecture design, will further strengthen your candidacy.,
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
You will be working as a Data Engineer with expertise in Python and Pyspark programming. You should have a strong background in utilizing Cloud services such as Snowflake, Databricks, Informatica, Azure, AWS, GCP, as well as proficiency in Reporting technologies like PowerBI, Tableau, Spotfire, Alteryx, and Microstrategy. Your responsibilities will include developing and maintaining data pipelines, optimizing data workflows, and ensuring the efficiency and reliability of data integration processes. You will be expected to possess strong programming skills in Python and Pyspark, along with a deep understanding of SQL. It is essential for you to have experience in utilizing Snowflake, Databricks, PowerBI, Microstrategy, Tableau, and Spotfire. Additionally, familiarity with Informatica and Azure/AWS services would be advantageous. The interview process will be conducted virtually, and the work model for this position is remote. If you have 7-10 years of experience in this field and are available to start within 15 days, please consider applying for this opportunity by sending your resume to netra.s@twsol.com.,
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
hyderabad, telangana
On-site
You are a highly skilled Architect with expertise in Snowflake Data Modeling and Cloud Data solutions. With over 12 years of experience in Data Modeling/Data Warehousing and 5+ years specifically in Snowflake, you will lead Snowflake optimizations at warehouse and database levels. Your role involves setting up, configuring, and deploying Snowflake components efficiently for various projects.

You will work with a passionate team of engineers at ValueMomentum's Engineering Center, focused on transforming the P&C insurance value chain through innovative solutions. The team specializes in Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. As part of the team, you will have opportunities for role-specific skill development and contribute to impactful projects.

As an Architect, you will be responsible for optimizing Snowflake at both warehouse and database levels, setting up and configuring Snowflake components, and implementing cloud management frameworks. Proficiency in Python, PySpark, and SQL, along with experience on cloud platforms such as AWS, Azure, and GCP, is essential for this role.

Key Responsibilities:
- Work on Snowflake optimizations at warehouse and database levels.
- Set up, configure, and deploy Snowflake components such as databases, warehouses, and roles.
- Set up and monitor data shares and Snowpipes for Snowflake projects.
- Implement Snowflake cloud management frameworks for monitoring, alerting, governance, budgets, change management, and cost optimization.
- Develop cloud usage reporting for cost-related insights, metrics, and KPIs.
- Build and enhance Snowflake forecasting processes and explore cloud spend trends.

Requirements:
- 12+ years of experience in Data Modeling/Data Warehousing.
- 5+ years of experience in Snowflake Data Modeling and Architecture, including expertise in cloning, data sharing, and search optimization.
- Proficiency in Python, PySpark, and complex SQL for analysis.
- Experience with cloud platforms such as AWS, Azure, and GCP.
- Knowledge of Snowflake performance management and cloud-based database role management.

ValueMomentum is a leading solutions provider for the global property and casualty insurance industry. It focuses on helping insurers achieve sustained growth, high performance, and stakeholder value. The company has served over 100 insurers and is dedicated to fostering resilient societies.

Benefits at ValueMomentum include a competitive compensation package, career advancement opportunities through coaching and mentoring programs, comprehensive training and certification programs, and performance management with goal setting, continuous feedback, and rewards for exceptional performers.
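To illustrate the cloud usage and cost reporting responsibilities above, here is a hedged sketch that pulls warehouse credit consumption from Snowflake's ACCOUNT_USAGE views into pandas. Connection details are placeholders, and querying ACCOUNT_USAGE requires an appropriately privileged role.

```python
# Illustrative sketch: daily warehouse credit usage for the last 30 days.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="reporting_user", password="***",   # placeholders
    warehouse="REPORTING_WH", role="ACCOUNTADMIN",
)

query = """
    SELECT WAREHOUSE_NAME,
           DATE_TRUNC('day', START_TIME) AS USAGE_DAY,
           SUM(CREDITS_USED)             AS CREDITS
    FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
    WHERE START_TIME >= DATEADD('day', -30, CURRENT_TIMESTAMP())
    GROUP BY 1, 2
    ORDER BY 1, 2
"""

try:
    cur = conn.cursor()
    cur.execute(query)
    usage = cur.fetch_pandas_all()   # requires the connector's pandas extra
finally:
    conn.close()

# Simple trend view: daily credits per warehouse, the raw input for cost KPIs.
print(usage.pivot_table(index="USAGE_DAY", columns="WAREHOUSE_NAME", values="CREDITS"))
```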
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
As a Data Product Analyst at Wells Fargo, you will be responsible for participating in low to moderate complexity data product initiatives. Your role will involve identifying opportunities for data roadmap improvements within your scope of responsibilities to drive data enablement and capabilities across platforms and utilities. You will review and analyze basic business, operational, or technical assignments that require research and evaluation to drive data enablement strategies. Additionally, you will present recommendations for resolving data product situations, collaborate with stakeholders to understand business requirements, and manage datasets focusing on consumer needs and data governance standards. Moreover, you will participate in the creation and maintenance of data product roadmaps, gather data requirements, and communicate data problems and initiatives effectively to all audiences. Required qualifications include 2+ years of data product or data management experience, or equivalent demonstrated expertise in maintaining and improving data quality across the organization. Your responsibilities will also involve participating in analysis to identify and remediate data quality issues, adhering to data governance standards, and designing data governance and data quality policies. Furthermore, you will support regulatory analysis and reporting requirements, work with business and technology partners to document metadata about systems, and assess the current state of data quality. Desired qualifications for this role include experience in large enterprise data initiatives, managing data entry processes, resolving data quality issues, banking business or technology experience, and familiarity with BI tools and cloud concepts. In addition, knowledge of T-SQL, database, data warehousing, ETL concepts, BI solutions, Agile principles, and various technical skills are preferred for this position. As a Data Product Analyst, you are expected to assist in implementing data processes, monitor data flows, ensure consistent data definition across systems, collaborate with data engineers, and resolve data quality issues. The posting end date for this job is 17 Jul 2025, with the possibility of early closure due to the volume of applicants. Wells Fargo values equal opportunity and encourages applications from all qualified candidates. The company maintains a drug-free workplace and requires candidates to represent their own experiences during the recruiting and hiring process. If you require a medical accommodation during the application or interview process, you can visit Disability Inclusion at Wells Fargo for assistance. Third-party recordings are prohibited unless authorized by Wells Fargo, and candidates should adhere to the company's recruitment and hiring requirements.,
Posted 1 week ago