5.0 - 9.0 years
0 Lacs
coimbatore, tamil nadu
On-site
You will be joining IntelliDash as a Data Engineering Architect in Coimbatore on a full-time, on-site basis. Your primary responsibility will be to design and manage data architectures, develop data models, and ensure data governance. You will oversee Extract, Transform, Load (ETL) processes, maintain data warehouses, and collaborate with analytics and development teams to uphold data integrity and efficiency.

To excel in this role, you should have a strong background in data governance and data architecture, along with proficiency in data modeling and ETL processes. Expertise in data warehousing is essential, coupled with excellent analytical and problem-solving skills. Your communication and collaboration abilities will be crucial when working both independently and alongside a team. Prior experience in the manufacturing analytics industry would be advantageous. A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is required.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
Armada is an edge computing startup dedicated to providing computing infrastructure to remote areas with limited connectivity and cloud infrastructure, as well as locations requiring local data processing for real-time analytics and AI at the edge. We are on a mission to bridge the digital divide by deploying advanced technology infrastructure rapidly. To further this mission, we are seeking talented individuals to join our team.

As a BI Engineer at Armada, you will play a crucial role in designing, building, and maintaining robust data pipelines and visualization tools. Your focus will be on empowering data-driven decision-making throughout the organization by collaborating closely with stakeholders to translate business requirements into actionable insights through the development and optimization of BI solutions.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines to facilitate data integration from multiple sources.
- Construct and optimize data models and data warehouses to support business reporting and analysis.
- Create dashboards, reports, and data visualizations using BI tools such as Power BI, Tableau, or Looker.
- Collaborate with data analysts, data scientists, and business stakeholders to understand reporting needs and deliver effective solutions.
- Ensure data accuracy, consistency, and integrity within reporting systems.
- Perform data validation, cleansing, and transformation as needed (a minimal example follows this listing).
- Identify opportunities for process automation and enhance reporting efficiency.
- Monitor BI tools and infrastructure performance, troubleshooting issues when necessary.
- Stay updated on emerging BI technologies and best practices.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 2-4 years of experience as a BI Engineer, Data Engineer, or similar role.
- Proficiency in SQL with experience in data modeling and data warehousing (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with BI and data visualization tools (e.g., Power BI, Tableau, Qlik, Looker).
- Strong understanding of ETL processes and data pipeline design.
- Excellent problem-solving skills and attention to detail.

Preferred Skills:
- Experience with Python, R, or other scripting languages for data manipulation.
- Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud Platform).
- Understanding of version control (e.g., Git) and CI/CD practices.
- Familiarity with APIs, data governance, and data cataloging tools.

At Armada, we offer a competitive base salary and equity options, providing you with the opportunity to share in our success and growth. If you are intellectually curious, possess strong business acumen, and thrive in a fast-paced, collaborative environment, we encourage you to apply. Join us in making a difference and contributing to the success of Armada.
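As an illustration of the data validation and cleansing work this role describes, here is a minimal sketch in Python using pandas. The table columns and rules are hypothetical, not Armada's actual schema:

```python
# Hypothetical validation step of the kind a BI engineer might add to an
# ETL pipeline. Column names and rules are illustrative only.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicates, enforce types, and keep only rows passing basic checks."""
    df = df.drop_duplicates(subset=["order_id"])
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    # Reject rows with missing keys, unparseable dates, or negative amounts.
    valid = df["order_id"].notna() & df["order_date"].notna() & (df["amount"] >= 0)
    return df[valid]

if __name__ == "__main__":
    raw = pd.DataFrame({
        "order_id": [1, 1, 2, None],
        "order_date": ["2024-01-05", "2024-01-05", "bad-date", "2024-01-07"],
        "amount": [100.0, 100.0, 50.0, -5.0],
    })
    print(validate_orders(raw))  # only the clean, de-duplicated rows survive
```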
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chandigarh
On-site
We are seeking a Data Scientist with over 3 years of experience in Machine Learning, Deep Learning, and Large Language Models (LLMs) to join our team at SparkBrains Private Limited in Chandigarh. As a Data Scientist, you will be responsible for leveraging your analytical skills and expertise in data modeling to develop and deploy AI-driven solutions that provide value to our business and clients.

Your key responsibilities will include gathering, cleaning, and preparing data for model training, designing and optimizing machine learning and deep learning models, integrating Large Language Models (LLMs) for NLP tasks (a brief sketch follows this listing), identifying relevant features for model accuracy, conducting rigorous model evaluation and optimization, creating data visualizations and insights for stakeholder communication, developing and deploying APIs, and collaborating with cross-functional teams while documenting processes effectively.

To qualify for this role, you must hold a Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field, along with a minimum of 3 years of experience as a Data Scientist or AI Engineer. You should also possess proficiency in Python and relevant ML/AI libraries, hands-on experience with LLMs, a strong understanding of NLP, neural networks, and deep learning architectures, knowledge of data wrangling and visualization techniques, experience with APIs and cloud platforms, analytical and problem-solving skills, as well as excellent communication skills for effective collaboration.

By joining our team, you will have the opportunity to work on cutting-edge AI/ML projects, be part of a collaborative work environment that emphasizes continuous learning, gain exposure to diverse industries and domains, and benefit from a competitive salary and growth opportunities. This is a full-time, permanent position with a day shift schedule from Monday to Friday, requiring in-person work at our Chandigarh office.
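As a brief illustration of the NLP tooling this role touches on, here is a minimal sketch using the Hugging Face transformers pipeline API. The default checkpoint it downloads is an assumption; any text-classification model would work similarly:

```python
# Minimal NLP inference sketch with the transformers pipeline API.
# The first call downloads a default sentiment-analysis checkpoint.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("The onboarding flow was smooth and the support team was great.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```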
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Data Analyst at our company based in Hyderabad, IN, with over 7 years of experience, you will play a crucial role in leveraging your expertise in SQL, data architecture, and the Google Cloud Platform (GCP) ecosystem. Your primary responsibility will be to transform complex business queries into actionable insights, shaping strategic decisions and contributing to the future direction of our Product/Operations team. A combination of technical proficiency, analytical rigor, and exceptional communication skills is essential to collaborate effectively with engineering, product, and business stakeholders.

Your key responsibilities will include:
- Advanced data analysis using SQL to query, analyze, and manipulate large datasets (a brief sketch follows this listing).
- Creating and maintaining scalable dashboards and reports.
- Managing source code effectively using platforms like GitHub.
- Partnering with product managers to address critical business questions.
- Collaborating with data engineers on data architecture and pipelines.
- Translating complex data findings into compelling narratives for various audiences.
- Leading analytical projects from inception to delivery.
- Mentoring junior analysts to foster a data-driven problem-solving culture.

The ideal candidate should hold a Bachelor's degree in a quantitative field such as Computer Science or Statistics, possess a minimum of 5 years of experience in data analysis or business intelligence, demonstrate expert-level proficiency in SQL, have a solid understanding of data architecture and data modeling principles, excel in communication and stakeholder influencing, be familiar with business intelligence tools like Tableau or Looker, have experience with Google data tools like BigQuery, and exhibit a strong sense of curiosity and passion for data insights.

In this role, you will lead a team of data scientists and analysts, oversee the development of data models and algorithms for new product initiatives, provide strategic direction for data science projects aligned with business objectives, collaborate with cross-functional teams to integrate data science solutions, analyze complex datasets for trends and patterns, utilize generative AI techniques, ensure adherence to ITIL V4 practices, mentor team members, monitor project progress, drive continuous improvement in data science methodologies, and foster a culture of innovation and collaboration within the team.

To qualify for this role, you should have a solid background in business analysis and data analysis, expertise in generative AI, an understanding of ITIL V4 practices, excellent communication and collaboration skills, proficiency in team management, and a commitment to working from the office during day shifts.
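To illustrate the SQL-on-BigQuery workflow this role describes, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, and table names are hypothetical:

```python
# Minimal BigQuery analysis sketch; project/dataset/table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # assumed project id

query = """
    SELECT region, COUNT(*) AS orders, SUM(revenue) AS total_revenue
    FROM `my-analytics-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY region
    ORDER BY total_revenue DESC
"""

# client.query() submits the job; result() blocks until rows are ready.
for row in client.query(query).result():
    print(row["region"], row["orders"], row["total_revenue"])
```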
Posted 2 weeks ago
5.0 - 9.0 years
0 - 0 Lacs
karnataka
On-site
Overview of the Company: 66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With unmatched engineering capabilities and vast industry experience, we assist the world's leading brands in transforming their business challenges into opportunities and shaping the future of work. At 66degrees, the values of embracing challenges and winning together guide us not only in achieving company goals but also in creating a significant impact for our employees. We are dedicated to fostering a culture that sparks innovation and supports professional and personal growth.

Overview of Role: We are looking for an experienced Data Architect to design, develop, and maintain our Google Cloud data architecture. The ideal candidate will possess a strong background in data architecture, data engineering, and cloud technologies, with expertise in managing data across Google Cloud platforms.

Responsibilities:
- GCP Cloud Architecture: Design, implement, and manage robust, scalable, and cost-effective cloud-based data architectures on Google Cloud Platform (GCP), utilizing services like BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, Cloud Dataproc, Cloud Run, and Cloud Composer (see the sketch after this listing). Experience in designing cloud architectures on Oracle Cloud is advantageous.
- Data Modeling: Develop and maintain conceptual, logical, and physical data models to support various business needs.
- Big Data Processing: Design and implement solutions for processing large datasets using technologies such as Spark and Hadoop.
- Data Governance: Establish and enforce data governance policies, including data quality, security, compliance, and metadata management.
- Data Pipelines: Build and optimize data pipelines for efficient data ingestion, transformation, and loading.
- Performance Optimization: Monitor and tune data systems to ensure high performance and availability.
- Collaboration: Work closely with data engineers, data scientists, and other stakeholders to understand data requirements and provide architectural guidance.
- Innovation: Stay updated with the latest technologies and trends in data architecture and cloud computing.

Qualifications:
- GCP Core Services: In-depth knowledge of GCP data services, including BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, Cloud Dataproc, Cloud Run, and Cloud Composer.
- Data Modeling: Expertise in data modeling techniques and best practices.
- Big Data Technologies: Hands-on experience with Spark and Hadoop.
- Cloud Architecture: Proven ability to design scalable, reliable, and cost-effective cloud architectures.
- Data Governance: Understanding of data quality, security, compliance, and metadata management.
- Programming: Proficiency in SQL, Python, and dbt (Data Build Tool).
- Problem-Solving: Strong analytical and problem-solving skills.
- Communication: Excellent written and verbal communication skills.
- A Bachelor's degree in Computer Science, Computer Engineering, Data, or a related field is required, or equivalent work experience.
- GCP Professional Data Engineer or Cloud Architect certification is a plus.
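As a small illustration of the streaming ingestion work mentioned above, here is a minimal sketch of publishing an event to Cloud Pub/Sub with the google-cloud-pubsub Python client; the project and topic names are placeholders:

```python
# Minimal Cloud Pub/Sub publish sketch (hypothetical project/topic names).
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "order-events")

# Messages are bytes; keyword arguments become lightweight string attributes.
future = publisher.publish(
    topic_path,
    b'{"order_id": 123, "amount": 49.99}',
    source="checkout-service",
)
print("published message id:", future.result())  # blocks until the broker acks
```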
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be responsible for utilizing Apex to execute flow and transaction control statements on Salesforce servers, in coordination with calls to the API. With over 8 years of relevant experience, you will leverage Lightning Components, Visualforce, and JavaScript UI frameworks to develop single-page applications for both desktop and mobile platforms within the Salesforce application.

Additionally, you will be tasked with using various web services such as the SOAP API, REST API, Bulk API, and Metadata API to integrate Salesforce with external systems, as well as creating APIs that can be consumed by external applications. Your proficiency in SOQL and SOSL, Salesforce's database query languages, will be crucial in conducting searches on Salesforce data through field-based and text-based queries (an illustrative example follows this listing).

Your role will require a strong understanding of code optimization, various design pattern techniques, data modeling, and backend logic using Apex. Furthermore, expertise in Lightning Web Components, experience with version control software (Git, SVN, etc.), and familiarity with working in an agile environment will be essential. The ability to deliver effectively against multiple initiatives simultaneously, prioritize tasks efficiently, and demonstrate excellent written and verbal communication skills, analytical capabilities, and troubleshooting abilities will be key to success in this role.

Please note that the project shift timings for this position are from 5 PM to 2 AM IST.
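As an illustration of running a SOQL query through the Salesforce REST API, here is a minimal Python sketch using the requests library. The instance URL, API version, and access token are placeholders; a real integration would obtain the token via an OAuth flow:

```python
# Minimal SOQL-over-REST sketch. Instance URL and token are placeholders.
import requests

INSTANCE_URL = "https://example.my.salesforce.com"  # hypothetical org
ACCESS_TOKEN = "<oauth-access-token>"               # obtained via OAuth in practice

soql = "SELECT Id, Name, Industry FROM Account WHERE Industry = 'Technology' LIMIT 10"
resp = requests.get(
    f"{INSTANCE_URL}/services/data/v57.0/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": soql},  # requests URL-encodes the query string
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])
```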
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Experience in developing digital marketing/digital analytics solutions using Adobe products is essential for this role. You should have experience with Adobe Experience Cloud products and recent experience with Adobe Experience Platform or a similar CDP. Good knowledge of the Data Science Workspace and building intelligent services on AEP is required. You should also have a strong understanding of datasets in Adobe Experience Platform, including loading data into the Platform through data source connectors, APIs, and streaming ingestion connectors.

Furthermore, experience in creating all required Adobe XDM (Experience Data Model) schemas in JSON, based on the approved data model, for all data files being loaded is necessary. Knowledge of using the Adobe Experience Platform (AEP) UI and Postman to automate all customer schema, data lake, and profile design setups within each sandbox environment is also expected. Additionally, you should have experience configuring all necessary identities and privacy settings within Adobe Experience Platform, and creating new segments within AEP to meet customer use cases. It is important to be able to test and validate the segments with the required destinations.

Managing customer data using the Real-Time Customer Data Platform (RTCDP) and analyzing customer data using Customer Journey Analytics (CJA) are key responsibilities of this role. You are required to have experience with creating connections, data views, and dashboards in CJA. Hands-on experience in the configuration and integration of Adobe Marketing Cloud modules like Audience Manager, Analytics, Campaign, and Target is also essential. Adobe Experience Cloud tool certifications (Adobe Campaign, Adobe Experience Platform, Adobe Target, Adobe Analytics) are desirable for this position. A proven ability to communicate verbally and in writing in a high-performance, collaborative environment is expected, as is experience with data analysis, modeling, and mapping to coordinate closely with Data Architect(s).

At Capgemini, you can shape your career with a range of career paths and internal opportunities within the Capgemini group. Comprehensive wellness benefits are provided, including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With over 55 years of heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by market-leading capabilities in AI, generative AI, cloud, and data, combined with deep industry expertise and a partner ecosystem.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
You are a strategic thinker passionate about driving solutions in financial analysis. You have found the right team. As a Data Domain Architect Lead - Vice President within the Finance Data Mart team, you will be responsible for overseeing the design, implementation, and maintenance of data marts to support our organization's business intelligence and analytics initiatives.

Key responsibilities:
- Lead the design and development of data marts, ensuring alignment with business intelligence and reporting needs.
- Collaborate with business stakeholders to gather and understand data requirements, translating them into technical specifications.
- Develop and implement robust data models to support data marts, ensuring data integrity and consistency.
- Oversee the implementation of ETL (Extract, Transform, Load) processes to populate data marts with accurate and timely data.
- Optimize data mart performance and scalability, ensuring high availability and reliability.
- Monitor and troubleshoot data mart issues, providing timely resolutions and improvements.
- Document data mart structures, processes, and procedures, ensuring knowledge transfer and continuity.
- Mentor and guide a team of data mart developers as needed, fostering a collaborative and innovative work environment.
- Stay updated with industry trends and best practices in data warehousing, data modeling, and business intelligence.

Required qualifications, capabilities, and skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Extensive experience in data warehousing, data mart development, and ETL processes.
- Strong expertise in data lakes, data modeling, and database management systems (e.g., Databricks, Snowflake, Oracle, SQL Server).
- Leadership experience, with the ability to manage and mentor a team.
- Excellent problem-solving skills and attention to detail.
- Strong communication and interpersonal skills to work effectively with cross-functional teams.

Preferred qualifications, capabilities, and skills:
- Experience with cloud-based data solutions (e.g., AWS, Azure, Google Cloud).
- Familiarity with advanced data modeling techniques and tools.
- Knowledge of data governance, data security, and compliance practices.
- Experience with business intelligence tools (e.g., Tableau, Power BI).

Candidates must be able to physically work in our Bengaluru office in the evening shift, 2 PM to 11 PM IST. The specific schedule will be determined and communicated by direct management.
Posted 2 weeks ago
5.0 - 10.0 years
35 - 45 Lacs
Bengaluru
Hybrid
Expert in the operating model and AJG data governance; SOPs for Collibra; Collibra Data Catalog KPIs; manual stitching of assets in Collibra; technical skills; workflow design and stakeholder management. Hands-on experience in data governance and Collibra.

Required candidate profile: Implementation, configuration, and maintenance of the Collibra Data Governance Platform; working with stewards, data owners, and stakeholders; data governance, quality, and integration principles; Collibra Ranger certified.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
vadodara, gujarat
On-site
As a Backend Developer, you will be responsible for designing and developing complex backend features using the Laravel framework (version 9+) while adhering to SOLID principles and clean architecture patterns. Your role will involve building and maintaining RESTful APIs for mobile applications, web frontends, and external integrations, ensuring proper versioning and backwards compatibility. In addition, you will implement robust queue-based job processing systems using Laravel Queues (Redis/database) to handle asynchronous operations efficiently.

Your responsibilities will also include designing and optimizing complex database schemas with MySQL, focusing on advanced relationship modeling, query optimization, stored procedures, triggers, and materialized view maintenance. It will be crucial for you to create comprehensive automated test suites, including unit tests, integration tests, and feature tests using PHPUnit, to maintain high code coverage and ensure system reliability.

Furthermore, you will work on optimizing application performance through caching strategies (Redis), database query optimization, and efficient memory management for high-volume operations. Collaboration with frontend developers to design efficient API contracts and ensure seamless integration between backend services and user interfaces will be an essential part of your role. Troubleshooting and debugging production issues using logging, monitoring tools, and performance profiling to maintain system stability and performance will also be within your scope of work.

To succeed in this role, you will need strong analytical thinking skills for analyzing complex business requirements and translating them into scalable technical solutions. Excellent communication skills, both written and verbal, will be necessary for effective collaboration with cross-functional teams, documenting technical decisions, and participating in code reviews. Attention to detail, especially regarding code quality, data integrity, and system reliability, will be crucial. You should also have a good understanding of performance optimization techniques, the learning agility to adapt to new technologies and frameworks quickly, and a commitment to writing clean, maintainable, well-documented code with comprehensive test coverage.

Experience with third-party API integrations, webhook handling, and building resilient systems that can handle external service failures gracefully will be beneficial. An understanding of data modeling, ETL processes, and efficient handling of large datasets with proper validation and transformation will also be required. Moreover, knowledge of web application security best practices and familiarity with deployment processes, environment management, monitoring, logging, and maintaining production systems are essential for this role.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We are currently recruiting a Database Engineer to join our software engineering team. As a Database Engineer, you will play a crucial role in developing high-performing, scalable, enterprise-grade, data-driven applications. Your responsibilities will include designing and developing high-volume, low-latency applications for mission-critical systems, ensuring high availability and performance. You will contribute to all phases of the development lifecycle, write efficient and testable code, participate in code reviews, and lead team refactoring efforts to enhance processes.

To qualify for this position, you should have at least 3 years of experience working as a database engineer or in a related role. You must possess strong SQL expertise and a deep understanding of various database objects such as tables, views, functions, stored procedures, and triggers. Experience in data modeling, data warehousing architecture, SQL Server administration, database tuning, ETL processes, and database operations best practices is essential. You should be familiar with troubleshooting potential issues, testing and tracking bugs at the raw-data level, and working in an Agile development process using tools like JIRA, Bamboo, and Git.

Preferred qualifications include a degree in computer science or a related technical field, experience with MySQL and Microsoft SQL Server, and proficiency in Python. Additionally, you should have experience working with stakeholders to gather requirements, handling production systems, and a strong desire to learn new technologies. A growth mentality and the motivation to become a key member of the team are also important attributes for this role.

The job is located in Mumbai and offers a free pickup and drop cab and food facilities. If you meet the qualification criteria and are interested in joining our team, please share your updated resume to careers@accesshealthcare.com. For further details, you can contact HR - Rathish at +91-91762-77733.

Venue: Access Healthcare Services Pvt Ltd, Empire Tower, 14th floor, D wing, Reliable Cloud City, Gut no-31, Thane - Belapur Road, Airoli, Navi Mumbai, Maharashtra 400708.

Employment Type: Full Time | Role: Group Leader - Business Analyst | Industry: BPO, Call Centre, ITES | Salary: Best in the industry | Function: ITES, BPO, KPO, LPO, Customer Service, Operations | Experience: 1 - 4 Years

Please note that the responsibilities and qualifications mentioned in the job description are subject to change based on the requirements of the organization.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
The Microsoft Cloud Data Engineer role is a great opportunity for a talented and motivated individual to design, construct, and manage cloud-based data solutions using Microsoft Azure technologies. Your primary responsibility will be to create strong, scalable, and secure data pipelines and to support analytics workloads that drive business insights and data-based decision-making.

You will design and deploy ETL/ELT pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage. Additionally, you will be responsible for developing and overseeing data integration workflows to bring in data from various sources such as APIs, on-prem systems, and cloud services. It will also be important to optimize and maintain SQL-based data models, views, and stored procedures in Azure SQL, SQL Managed Instance, or Synapse SQL Pools.

Collaboration with analysts, data scientists, and business teams will be crucial to gather data requirements and provide reliable, high-quality datasets. You will need to ensure data quality, governance, and security by implementing robust validation, monitoring, and encryption mechanisms. Supporting infrastructure automation using Azure DevOps, ARM templates, or Terraform for resource provisioning and deployment will also be part of your responsibilities, along with troubleshooting, performance tuning, and the continuous improvement of the data platform.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field. A minimum of 3 years of experience in data engineering with a focus on Microsoft Azure data services is required. Hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake is a must. Strong proficiency in SQL and data modeling is essential, along with experience in Python, PySpark, or .NET for data processing. An understanding of data warehousing, data lakes, and ETL/ELT best practices is important, as is familiarity with DevOps tools and practices in an Azure environment. Knowledge of Power BI or similar visualization tools is also beneficial. Additionally, holding the Microsoft Certified: Azure Data Engineer Associate certification or its equivalent is preferred.
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
We are looking for an experienced and dedicated Senior Manager of Business Intelligence & Data Engineering to lead a team of engineers. In this role, you will oversee various aspects of the Business Intelligence (BI) ecosystem, including designing and maintaining data pipelines, enabling advanced analytics, and providing actionable insights through BI tools and data visualization.

Your responsibilities will include leading the design and development of scalable data architectures on AWS, managing data lakes, implementing data modeling and productization, collaborating with business stakeholders to create actionable insights, ensuring thorough documentation of data pipelines and systems, promoting knowledge-sharing within the team, and staying updated on industry trends in data engineering and BI.

You should have at least 10 years of experience in Data Engineering or a related field, with a strong track record in designing and implementing large-scale distributed data systems. Additionally, you should possess expertise in BI, data visualization, people management, CI/CD tools, cloud-based data warehousing, AWS services, data lake architectures, Apache Spark, SQL, enterprise BI platforms, and microservices-based architectures. Strong communication skills, a collaborative mindset, and the ability to deliver insights to technical and executive audiences are essential for this role.

Bonus points will be awarded if you have knowledge of data science and machine learning concepts, experience with Infrastructure as Code practices, familiarity with data governance and security in cloud environments, and domain understanding of apparel, retail, manufacturing, supply chain, or logistics.

If you are passionate about leading a high-performing team, driving innovation in data engineering and BI, and contributing to the success of a global sports platform like Fanatics Commerce, we welcome you to apply for this exciting opportunity.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We are seeking an experienced Databricks on AWS and PySpark engineer to join our team. Your role will involve designing, building, and maintaining large-scale data pipelines and architectures using Databricks on AWS, and optimizing data processing workflows with PySpark (a brief sketch follows this listing). Collaboration with data scientists and analysts to develop data models and ensure data quality, security, and compliance with industry standards will also be a key responsibility.

Your main tasks will include troubleshooting data pipeline issues, optimizing performance, and staying updated on industry trends and emerging data engineering technologies. You should have at least 3 years of experience in data engineering with a focus on Databricks on AWS and PySpark, strong expertise in PySpark and Databricks for data processing, modeling, and warehousing, and hands-on experience with AWS services like S3, Glue, and IAM.

Proficiency in data engineering principles, data governance, and data security is essential, along with experience in managing data processing workflows and data pipelines. Strong problem-solving skills, attention to detail, and effective communication and collaboration abilities are key soft skills required for this role, as is the capability to work in a fast-paced, dynamic environment while adapting to changing requirements and priorities.
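As a brief illustration of the PySpark pipeline work this role describes, here is a minimal sketch that reads raw data from S3, applies a simple transformation, and writes curated Parquet output. The bucket names and columns are hypothetical:

```python
# Minimal PySpark ETL sketch; bucket names and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/")  # assumed source path

cleaned = (
    raw.filter(F.col("amount") > 0)                 # drop invalid amounts
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])                # de-duplicate by key
)

# Partitioned Parquet output for efficient downstream queries.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```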
Posted 2 weeks ago
5.0 - 8.0 years
5 - 8 Lacs
Noida
Work from Office
Position: Lead Finance - FP&A / R2R / Financial Analyst
Experience: 5-8 years
Location: Noida
Shift: General (can be rotational)
Communication: Excellent verbal and written skills required

Role & Responsibilities:
- Manage a small team to ensure smooth operations
- Prepare and maintain reports and dashboards, including profitability analysis
- Automate reports and routine tasks for better efficiency
- Calculate commissions and bonuses, and resolve related queries
- Track project costs and monitor budgets
- Assist in surveys and respond to RFPs
- Create data models using BI tools
- Develop financial models such as budgets and forecasts

Preferred Background: Experience in FP&A, the R2R process, or as a Financial Analyst

If interested, please share your resume at Parul.singh1@artech.com

Regards,
Parul Singh
Posted 2 weeks ago
5.0 - 10.0 years
19 - 20 Lacs
Bengaluru
Remote
Hi candidates, we have job openings in one of our MNC companies. Interested candidates can apply here and share details to chandrakala.c@i-q.co. Note: notice period of 0-15 days, or currently serving notice, only.

Role & responsibilities:
We are looking for Data Managers.
Work experience: minimum 5 years (mandatory)
Location: Remote (India)

JD: The data modeler designs, implements, and documents data architecture and data modeling solutions, which include the use of relational, dimensional, and NoSQL databases. These solutions support enterprise information management, business intelligence, machine learning, data science, and other business interests.

The successful candidate will:
- Be responsible for the development of the conceptual, logical, and physical data models, and the implementation of RDBMS, operational data stores (ODS), data marts, and data lakes on target platforms (SQL/NoSQL).
- Oversee and govern the expansion of existing data architecture and the optimization of data query performance via best practices. The candidate must be able to work independently and collaboratively.

Responsibilities:
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL) and data tools (reporting, visualization, analytics, and machine learning).
- Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
- Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
- Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POC.
- Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.

Skills:
- Bachelor's or master's degree in computer/data science, or related technical experience.
- 5+ years of hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional and NoSQL data platform technologies, and ETL and data ingestion protocols).
- Experience with data warehouses, data lakes, and enterprise big data platforms in multi-datacenter contexts required.
- Good knowledge of metadata management, data modeling, and related tools (Erwin, ER/Studio, or others) required.
- Experience in team management, communication, and presentation.
Posted 2 weeks ago
5.0 - 9.0 years
14 - 24 Lacs
Hyderabad
Hybrid
Experience required:
- Bachelor's degree in computer science or engineering.
- 7+ years of experience with data analytics, data modeling, and database design.
- 5+ years of experience with Vertica.
- 2+ years of coding and scripting (Python, Java, Scala) and design experience.
- 2+ years of experience with Airflow (a minimal DAG sketch follows this listing).
- Experience with ELT methodologies and tools.
- Experience with GitHub.
- Expertise in tuning and troubleshooting SQL.
- Strong data integrity, analytical, and multitasking skills.
- Excellent communication, problem-solving, organizational, and analytical skills.
- Able to work independently.

Additional/preferred skills:
- Familiar with agile project delivery processes.
- Knowledge of SQL and its use in data access and analysis.
- Ability to manage diverse projects impacting multiple roles and processes.
- Able to troubleshoot problem areas and identify data gaps and issues.
- Ability to adapt to a fast-changing environment.
- Experience designing and implementing automated ETL processes.
- Experience with the MicroStrategy reporting tool.
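As a small illustration of the Airflow work listed above, here is a minimal DAG sketch. The DAG id, schedule, and Vertica load step are placeholders; a real pipeline would use the vertica-python driver or a provider hook:

```python
# Minimal Airflow 2.x DAG sketch; ids, schedule, and load logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_to_vertica():
    # In a real pipeline this would extract, transform, and COPY data into
    # Vertica, e.g. via the vertica-python driver.
    print("loading batch into vertica...")

with DAG(
    dag_id="daily_vertica_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load_to_vertica", python_callable=load_to_vertica)
```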
Posted 2 weeks ago
12.0 - 22.0 years
25 - 32 Lacs
Chennai, Bengaluru
Hybrid
Technical Manager, Business Intelligence
Location: Chennai/Bangalore
Experience: 15+ years
Employment Type: Full Time

Role Description: We are seeking an experienced Technical Manager to lead our Business Intelligence function. This role is crucial for transforming raw data into actionable insights that drive strategic decision-making. The ideal candidate will be a thought leader in BI, adept at guiding a team, collaborating with stakeholders to understand business requirements, and leveraging advanced BI tools and methodologies to deliver impactful dashboards, reports, and analytical solutions.

Responsibilities:
- Drive the vision and strategy for Business Intelligence, promoting data-driven decision-making across the organization.
- Lead, mentor, and develop a team of BI developers and analysts, fostering expertise in data visualization, reporting, and analytical best practices.
- Oversee the design, development, and deployment of interactive dashboards, reports, and analytical applications that meet diverse business needs.
- Ensure that insights are presented clearly, concisely, and compellingly to various audiences, enabling effective business action.
- Work closely with pre-sales, sales, marketing, Data Engineering, Data Science, and other departments to identify key performance indicators (KPIs), define reporting requirements, and support data-driven initiatives.
- Collaborate with Data Engineering to ensure data accuracy, consistency, and reliability within BI solutions.
- Evaluate and recommend new BI tools, techniques, and platforms to enhance reporting capabilities and user experience.

Tools & Technologies:
- BI Platforms: Tableau, Power BI, Qlik Sense, Looker, DOMO
- Data Warehousing/Lakes: Snowflake, Google BigQuery, Amazon Redshift, MS Fabric
- SQL Databases: PostgreSQL, MySQL, SQL Server, Oracle
- Data Modeling: Star Schema, Snowflake Schema, Data Vault
- ETL/ELT Concepts: Understanding of data extraction, transformation, and loading processes
- Programming Languages: SQL (advanced), Python (for data manipulation/analysis), R
- Cloud Platforms: Experience with BI services on AWS, Azure, or GCP
- Data Governance Tools: Collibra, MS Purview
- Version Control: Git
Posted 2 weeks ago
12.0 - 22.0 years
25 - 32 Lacs
Chennai, Bengaluru
Work from Office
Technical Manager, Data Engineering
Location: Chennai/Bangalore
Experience: 15+ years
Employment Type: Full Time

Role Description: We are looking for a seasoned Technical Manager to lead our Data Engineering function. This role demands a deep understanding of data architecture, pipeline development, and data infrastructure. The ideal candidate will be a thought leader in the data engineering space, capable of guiding and mentoring a team, collaborating effectively with various business units, and driving the adoption of cutting-edge tools and technologies to build robust, scalable, and efficient data solutions.

Responsibilities:
- Define and champion the strategic direction for data engineering, staying abreast of industry trends and emerging technologies.
- Lead, mentor, and develop a high-performing team of data engineers, fostering a culture of technical excellence, innovation, and continuous learning.
- Design, implement, and maintain scalable, reliable, and secure data pipelines and infrastructure. Ensure data quality, integrity, and accessibility.
- Oversee the end-to-end delivery of data engineering projects, ensuring timely completion, adherence to best practices, and alignment with business objectives.
- Partner closely with pre-sales, sales, marketing, Business Intelligence, Data Science, and other departments to understand data needs, propose solutions, and support resource deployment for active data projects.
- Evaluate, recommend, and implement new data engineering tools, platforms, and methodologies to enhance capabilities and efficiency.
- Identify and address performance bottlenecks in data systems, ensuring optimal data processing and storage.

Tools & Technologies:
- Cloud Platforms: AWS (S3, Glue, EMR, Redshift, Athena, Lambda, Kinesis), Azure (Data Lake Storage, Data Factory, Databricks, Synapse Analytics), Google Cloud Platform (Cloud Storage, Dataflow, Dataproc, BigQuery)
- Big Data Frameworks: Apache Spark, Apache Flink, Apache Kafka, HDFS
- Data Warehousing/Lakes: Snowflake, Databricks Lakehouse, Google BigQuery, Amazon Redshift, Azure Synapse Analytics
- ETL/ELT Tools: Apache Airflow, Talend, Informatica, dbt, Fivetran, Stitch
- Data Modeling: Star Schema, Snowflake Schema, Data Vault
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB
- Programming Languages: Python (Pandas, PySpark), Scala, Java
- Containerization/Orchestration: Docker, Kubernetes
- Version Control: Git
Posted 2 weeks ago
7.0 - 12.0 years
22 - 27 Lacs
Hyderabad, Pune, Mumbai (All Areas)
Work from Office
Job Description - Snowflake Developer
Experience: 7+ years
Location: India, Hybrid
Employment Type: Full-time

Job Summary: We are looking for a Snowflake Developer with 7+ years of experience to design, develop, and maintain our Snowflake data platform. The ideal candidate will have strong expertise in Snowflake SQL, data modeling, and ETL/ELT processes to build efficient and scalable data solutions.

Key Responsibilities:

1. Snowflake Development & Implementation
- Design and develop Snowflake databases, schemas, tables, and views
- Write and optimize complex SQL queries, stored procedures, and UDFs
- Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks)
- Manage virtual warehouses, resource monitors, and cost optimization

2. Data Pipeline & Integration
- Build and maintain ETL/ELT pipelines using Snowflake and tools like Snowpark, Python, or Spark (a connection sketch follows this listing)
- Integrate Snowflake with cloud storage (S3, Blob Storage) and data sources (APIs)
- Develop data ingestion processes (batch and real-time) using Snowpipe

3. Performance Tuning & Optimization
- Optimize query performance through clustering, partitioning, and indexing
- Monitor and troubleshoot data pipelines and warehouse performance
- Implement caching strategies and materialized views for faster analytics

4. Data Modeling & Governance
- Design star schema, snowflake schema, and normalized data models
- Implement data security (RBAC, dynamic data masking, row-level security)
- Ensure data quality, documentation, and metadata management

5. Collaboration & Support
- Work with analysts, BI teams, and business users to deliver data solutions
- Document technical specifications and data flows
- Provide support and troubleshooting for Snowflake-related issues

Required Skills & Qualifications:
- 7+ years in database development, data warehousing, or ETL
- 3+ years of hands-on Snowflake development experience
- Strong SQL and scripting (Python, Bash) skills
- Experience with Snowflake utilities (SnowSQL, Snowsight)
- Knowledge of cloud platforms (AWS, Azure) and data integration tools
- SnowPro Core Certification (preferred but not required)
- Experience with Coalesce, DBT, Airflow, or other data orchestration tools
- Familiarity with CI/CD pipelines and DevOps practices
- Knowledge of data visualization tools (Power BI, Tableau)
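To illustrate the Python-side integration mentioned above, here is a minimal sketch of connecting to Snowflake and running a query with the snowflake-connector-python package. The account, credentials, and table names are placeholders:

```python
# Minimal Snowflake query sketch; account, credentials, and objects are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",       # hypothetical account identifier
    user="ETL_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT region, SUM(revenue) FROM orders GROUP BY region")
    for region, revenue in cur.fetchall():
        print(region, revenue)
finally:
    conn.close()  # always release the session
```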
Posted 2 weeks ago
7.0 - 9.0 years
20 - 25 Lacs
Hyderabad
Work from Office
At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future.

We are looking forward to hiring Tableau professionals in the following areas:

Experience: 7-9 years
- Prepare the required data model in Tableau from the source files
- Build the required dashboard based on the wireframe designed
- Expertise in Tableau dashboard development
- Expert in Tableau data model setup
- Strong experience in SQL
- Ensure compliance with data governance and security policies
- Work closely with business and dev teams to translate the business/functional requirements into technical specifications that drive Big Data solutions to meet functional requirements

Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and an ethical corporate culture
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role Summary: We are seeking a strategic and hands-on Senior Manager to lead our Fraud Strategy function. This person will own the development and performance of fraud models and scorecards, work closely with cross-functional partners to define fraud policies, and ensure effective collaboration with our fraud operations team. The ideal candidate combines strong analytical skills, a solid understanding of fraud typologies in consumer lending, and a pragmatic approach to implementation.

Key Responsibilities:
- Lead fraud strategy for the lending portfolio, balancing risk mitigation with customer experience and approval rates.
- Develop and maintain fraud detection models and anomaly detection systems using internal and third-party data (an illustrative sketch follows this listing).
- Manage fraud scorecard performance and recommend model improvements or policy changes as needed.
- Collaborate with fraud operations to monitor real-time fraud trends and adapt policies dynamically.
- Partner with product, data engineering, and external vendors to evolve fraud detection infrastructure.
- Provide regular fraud performance reporting and deep-dives for senior leadership.
- Act as subject matter expert on fraud data, model outputs, and cross-channel vulnerabilities.

Qualifications:
- 5-8 years of experience in fraud strategy, analytics, or credit risk in financial services or fintech.
- Strong understanding of identity fraud, synthetic fraud, first-party fraud, and third-party fraud patterns.
- Hands-on experience with fraud models, machine learning tools, and scorecard management.
- Advanced proficiency in SQL and Python for data analysis and modeling.
- Experience working with third-party fraud data providers and integrating fraud rules or signals into decision engines.
- Ability to communicate insights and recommendations clearly to technical and non-technical stakeholders.
- Exposure to US consumer lending regulations and risk management practices preferred.
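As a rough illustration of the anomaly-detection work mentioned above, here is a minimal Python sketch using scikit-learn's IsolationForest on synthetic transaction features. The features and contamination rate are assumptions, not a production fraud model:

```python
# Toy anomaly-detection sketch; synthetic data, not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic features per transaction: [amount, transactions_last_24h]
normal = rng.normal(loc=[50, 3], scale=[20, 1], size=(500, 2))
suspicious = rng.normal(loc=[900, 30], scale=[50, 5], size=(5, 2))
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=42).fit(X)
scores = model.decision_function(X)  # lower score means more anomalous
flags = model.predict(X)             # -1 marks a flagged transaction

print(f"flagged {int((flags == -1).sum())} of {len(X)} transactions")
```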
Posted 2 weeks ago
12.0 - 16.0 years
30 - 45 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role & responsibilities

Title: Data Architect - Enterprise Data Management
Experience: 12+ years
No. of Positions: 1
Location: Delhi NCR, Bangalore, Pune (Hybrid)

Job Summary: We are seeking a seasoned Data Architect with 12+ years of experience in designing scalable, metadata-driven data architectures. This individual will lead initiatives spanning enterprise metadata management, federated query design, real-time data integration, and semantic layer modelling. The ideal candidate will be hands-on, able to translate complex requirements into robust solutions, and collaborate closely with business, technology, and governance stakeholders. Strong communication and documentation skills are essential, as this role operates at the intersection of data strategy, engineering, and enterprise governance.

Must-Have Skills:
- 12+ years of experience in data architecture and engineering, with deep expertise in metadata-driven, federated, and real-time data environments
- Core competencies:
  - Enterprise Metadata Management: Design and implementation of automated data discovery, cataloguing, and lineage tracking across heterogeneous data platforms
  - Federated Query Architecture: Building unified data access layers that abstract complexity across multiple data sources and query engines
  - Real-time Data Integration: Event-driven architectures for continuous metadata synchronization and schema evolution management
  - Data Governance Frameworks: Establishing automated data quality, privacy compliance, and access control patterns at enterprise scale
  - Semantic Layer Design: Creating business-friendly data models that bridge technical schemas with analytical requirements
- Technical proficiencies:
  - Programming: Python (data engineering libraries), SQL (advanced optimization), Scala/Java
  - Data Modelling: Dimensional modelling, graph databases, semantic web technologies
  - Search & Discovery: Full-text search engines, vector similarity search, ML-based data classification
  - API Architecture: REST, GraphQL, and gRPC for data service exposure
  - Streaming Platforms: Message queuing and event streaming architectures
- Track record of effective collaboration with Data Engineers, Governance Leads, BI/Analytics Developers, ML Engineers, and Product Owners on complex data initiatives
- Demonstrated ability to produce data architecture diagrams and lineage flows, and to maintain high-quality documentation standards
- Excellent written and verbal communication skills, with the ability to interact with executive sponsors, technology teams, and governance stakeholders
- Self-driven, hands-on architect with a lead-by-doing mindset for solution validation, issue resolution, and cross-team enablement

Nice-to-Have Skills:
- Implementation experience with enterprise knowledge graphs
- Understanding of data mesh and data fabric architectural approaches
- Experience with MLOps environments and integration of feature stores
- Execution of multi-cloud data strategies (e.g., AWS, Azure, GCP)
- Exposure to vector search, ML-based classification, and automated data discovery
- Familiarity with full-text search engines and search-driven metadata environments

Role & Responsibilities:
- Architect, implement, and evolve the enterprise metadata and data architecture to enable discovery, quality, and governance at scale
- Lead the design of a federated query layer that abstracts data access across distributed platforms and technologies
- Define and implement semantic layers for business-friendly data modeling and reporting enablement
- Develop and enforce data governance rules via architectural automation and controls
- Collaborate across functions with Data Engineers, Analytics Teams, Governance Stakeholders, UI/UX Designers, and ML Engineering teams to ensure architectural alignment and delivery
- Enable real-time metadata synchronization, schema tracking, and classification pipelines
- Create and maintain data architecture documentation, lineage maps, and solution artifacts
- Support strategic initiatives involving data mesh, knowledge graphs, MLOps, and cloud-native data ecosystems
- Drive solutioning, reviews, and standards as a technical advisor and hands-on architect
- Communicate architectural vision, design decisions, and roadmap updates with executives, product owners, and technical teams

Key Skills: Metadata management, data architecture, semantic modeling, federated queries, real-time integration, data governance, Python, SQL, Scala, graph modeling, GraphQL, Kafka, API design, data mesh, knowledge graphs

Education:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
- Certifications in Cloud Architecture, Metadata Management, or Data Modeling (e.g., DAMA, DCAM, TOGAF) preferred
Posted 2 weeks ago
1.0 - 5.0 years
3 - 7 Lacs
Gurugram
Work from Office
Coordination with Engineering, Service, and SMC for oils and paints, and with field sales (Parts, Accessories team). Coordination with oil and paint vendors and AIS 140 for regular activities to increase sales, understand best practices, etc. Preparation and management of senior management decks. Handling Suzuki Connect complaints and resolution. Developing strategies to generate additional revenue through data analytics and business insights. Coordination with the field team for query handling and resolution. Sales forecasting and identifying sales trends to improve process efficiency. Managing periodic audits and annual budget coordination.

Required skills: Strong knowledge of channel management (dealers and distribution). Proficiency in MS Excel and data modelling. Knowledge of Power BI is preferred. Data analysis and data visualization, with the ability to handle large data sets. Strong interpersonal skills and a collaborative approach. Key account management skills are also preferred.
Posted 2 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Overview: We are seeking a skilled DataOps Engineer with a strong foundation in DevOps practices and data engineering principles. The ideal candidate will be responsible for ensuring smooth deployment, observability, and performance optimization of data pipelines and platforms. You will work at the intersection of software engineering, DevOps, and data engineering, bridging gaps between development, operations, and data teams.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using tools such as Jenkins, Git, and Terraform.
- Manage and maintain Kubernetes (K8s) clusters for scalable and resilient data infrastructure.
- Develop and maintain observability tools and dashboards (e.g., Prometheus, Grafana, ELK stack) for monitoring pipeline and platform health (an instrumentation sketch follows this listing).
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools, preferably Terraform.
- Collaborate with data engineers to debug, optimize, and track performance of data pipelines (e.g., Airflow, Airbyte).
- Implement and monitor data quality, lineage, and orchestration workflows.
- Develop custom scripts and tools in Python to enhance pipeline reliability and automation.
- Work closely with data teams to manage and optimize Snowflake environments, focusing on performance tuning and cost efficiency.
- Ensure compliance with security, scalability, and operational best practices across the data platform.
- Act as a liaison between development and operations to maintain SLAs for data availability and reliability.

Required Skills & Experience:
- 4-8 years of experience in DevOps/DataOps/platform engineering roles.
- Proficient in managing Kubernetes clusters and associated tooling (Helm, Kustomize, etc.).
- Hands-on experience with CI/CD pipelines, especially using Jenkins, GitOps, and automated testing frameworks.
- Strong scripting and automation skills in Python.
- Experience with workflow orchestration tools like Apache Airflow and data ingestion tools like Airbyte.
- Solid experience with Infrastructure as Code tools, preferably Terraform.
- Familiarity with observability and monitoring tools such as Prometheus, Grafana, Datadog, or New Relic.
- Working knowledge of data platforms, particularly Snowflake, including query performance tuning and monitoring.
- Strong debugging and problem-solving skills, especially in production data pipeline scenarios.
- Excellent communication skills and ability to collaborate across engineering, operations, and analytics teams.

Preferred Qualifications:
- Experience with cloud platforms (AWS and/or GCP) and cloud-native DevOps practices.
- Familiarity with data cataloging and lineage tools.
- Exposure to container security, policy management, and data governance tools.
- Background in data modeling, SQL optimization, or data warehousing concepts is a plus.
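As a small illustration of pipeline observability, here is a minimal Python sketch that exposes custom metrics with the prometheus_client library for a Prometheus/Grafana stack to scrape. The metric names and port are arbitrary choices:

```python
# Minimal pipeline-metrics sketch using prometheus_client; names/port are arbitrary.
import time
from prometheus_client import Counter, Histogram, start_http_server

ROWS_PROCESSED = Counter("pipeline_rows_processed_total", "Rows processed")
BATCH_SECONDS = Histogram("pipeline_batch_duration_seconds", "Batch runtime")

def process_batch(rows):
    with BATCH_SECONDS.time():  # records batch duration in the histogram
        for _ in rows:
            ROWS_PROCESSED.inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        process_batch(range(100))
        time.sleep(10)
```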
Posted 2 weeks ago