
2379 Snowflake Jobs - Page 16

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0.0 years

19 - 22 Lacs

Hyderabad

Work from Office

Overview
This person will manage the environments of the Salesforce orgs and handle deployments across multiple Salesforce orgs.

Responsibilities
Oversee Salesforce Data Cloud environments across development, staging, and production. Define best practices for environment setup, security, and governance. Manage data pipelines, ingestion processes, and harmonization rules for efficient data flow. Establish role-based access control (RBAC) to ensure data security and compliance. Monitor data processing jobs, ingestion performance, and data harmonization. Ensure compliance with GDPR, CCPA, and other data privacy regulations. Establish CI/CD pipelines using tools like Azure DevOps. Implement version control and automated deployment strategies for Data Cloud configurations. Define a data refresh strategy for lower environments to maintain consistency.

Qualifications
Mandatory technical skills: Extensive experience in setting up, maintaining, and troubleshooting CI/CD pipelines for Salesforce apps. Strong knowledge of Azure DevOps tools and pipeline creation, with proficiency in automation scripting (primarily YAML, with additional languages as needed). Hands-on experience with SFDX, Azure Repos, and automated release deployments for Salesforce. Expertise in implementing Git branching strategies using VS Code integrated with the Salesforce CLI.
Mandatory skills: Proficiency in Salesforce Data Cloud architecture and best practices. Experience with data lakes, Snowflake, or cloud-based data storage solutions. Familiarity with OAuth, authentication mechanisms, and data security standards. Salesforce Data Cloud Consultant certification.
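As a rough illustration of the kind of automation scripting such pipelines involve, here is a minimal Python sketch that wraps a check-only (validation) Salesforce deployment. The org alias, source path, and CLI flags are assumptions based on the older sfdx command syntax, not details from the listing; newer CLI versions use different commands.

```python
import subprocess
import sys

def validate_deployment(org_alias: str, source_path: str = "force-app") -> int:
    """Run a check-only (validation) deploy against a Salesforce org.

    Uses the legacy `sfdx force:source:deploy` syntax; newer CLI versions
    use `sf project deploy start` instead, so adjust to your installation.
    """
    cmd = [
        "sfdx", "force:source:deploy",
        "--checkonly",                  # validate without committing changes
        "--sourcepath", source_path,    # metadata directory to deploy
        "--testlevel", "RunLocalTests",
        "--targetusername", org_alias,  # org alias configured via sfdx auth
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    # "uat-org" is a hypothetical alias for a lower environment.
    sys.exit(validate_deployment("uat-org"))
```

In an Azure DevOps YAML pipeline, a script step like this would typically run on pull-request validation before an automated release deployment.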

Posted 1 week ago

Apply

10.0 - 15.0 years

12 - 16 Lacs

Hyderabad

Work from Office

Overview
As a member of the Platform Engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's platforms and operations, driving a strong vision for how platform engineering can proactively create a positive impact on the business. You'll be an empowered leader of a team of platform engineers who build platform products for platform and cost optimization, build tools for platform ops and data ops on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As leader of the Platform Engineering team, you will help manage the platform governance team that builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments, and you will directly impact the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
Active contributor to cost optimization of platforms and services. Manage and scale Azure data platforms to support new product launches and drive platform stability and observability across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data platforms for cost and performance (a sketch of this kind of monitoring query follows this listing). Responsible for implementing best practices around systems integration, security, performance, and platform management. Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionalize data science models. Define and manage SLAs for platforms and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries.

Qualifications
10+ years of overall technology experience, including at least 4+ years of hands-on software development, program management, and advanced analytics. 4+ years of experience with Power BI, Tableau, data warehousing, and data analytics tools. 4+ years of experience in platform optimization and performance tuning. Experience managing multiple teams and coordinating with different stakeholders to implement the team's vision. Fluent with Azure cloud services; Azure certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building semantic models. Proficient in DAX queries, Copilot, and AI skills. Experience building/operating highly available, distributed systems for data visualization. Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Knowledge of Azure Data Factory and Azure Databricks. Experience with statistical/ML techniques is a plus. Experience building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with augmented analytics tools (such as ThoughtSpot or Tellius) is a plus.
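The listing asks for monitoring frameworks that capture cost and performance KPIs. As one hedged illustration, here is a minimal Python sketch that pulls warehouse credit consumption from Snowflake's ACCOUNT_USAGE share (Snowflake being one of the MPP platforms the listing names); the connection parameters are placeholders, not details from the listing.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials; in practice these come from a secrets store.
conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="monitoring_svc",
    password="***",
    warehouse="ADMIN_WH",
    database="SNOWFLAKE",
    schema="ACCOUNT_USAGE",
)

# Daily credit consumption per warehouse over the last 30 days --
# a typical cost KPI for a platform-ops dashboard.
QUERY = """
    SELECT warehouse_name,
           DATE_TRUNC('day', start_time) AS usage_day,
           SUM(credits_used)             AS credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
    GROUP BY 1, 2
    ORDER BY usage_day, credits DESC
"""

try:
    cur = conn.cursor()
    for warehouse, day, credits in cur.execute(QUERY):
        print(f"{day:%Y-%m-%d}  {warehouse:<20} {credits:10.2f} credits")
finally:
    conn.close()
```

A production framework would land these metrics in a table and surface them in Power BI or Tableau rather than printing them.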

Posted 1 week ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Overview
As an Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role advocates Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll work in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems, and you will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies. Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construct database objects for baseline and investment-funded projects, as assigned. Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools. Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse. Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create source-to-target mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); and data in transit. Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing. Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders. Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
4+ years of overall technology experience, including at least 2+ years of data modeling and systems architecture. Around 2+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 2+ years of experience developing enterprise data models. Experience building solutions in the retail or supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi-cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI).
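The listing names data profiling and quality tools (Apache Griffin, Deequ, Great Expectations). As a hedged, plain-pandas sketch of the kind of checks those tools automate, here is a minimal example; the file and column names are illustrative, not from the listing.

```python
import pandas as pd

# Hypothetical extract; column names are illustrative placeholders.
orders = pd.read_csv("orders_extract.csv")

checks = {
    # Primary-key style check: no duplicate order IDs.
    "order_id is unique": orders["order_id"].is_unique,
    # Completeness check: no missing customer references.
    "customer_id has no nulls": orders["customer_id"].notna().all(),
    # Range check: amounts are non-negative.
    "amount >= 0": (orders["amount"] >= 0).all(),
}

failures = [name for name, passed in checks.items() if not passed]
for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")

# In a real pipeline, failures would block the load or raise an alert;
# tools like Deequ or Great Expectations express these rules declaratively.
if failures:
    raise ValueError(f"Data quality checks failed: {failures}")
```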

Posted 1 week ago

Apply

8.0 - 13.0 years

18 - 22 Lacs

Hyderabad

Work from Office

Overview
Enterprise Data Operations Sr Analyst (L08). As Senior Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role advocates Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll work in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems, and you will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies. Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construct database objects for baseline and investment-funded projects, as assigned. Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools. Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse. Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create source-to-target mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); and data in transit. Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing. Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders. Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture. 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 4+ years of experience developing enterprise data models. Experience building solutions in the retail or supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi-cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI).

Does the person hired for this job need to be based in a PepsiCo office, or can they be remote? The employee must be based in a PepsiCo office. Primary work location: Hyderabad HUB-IND.

Posted 1 week ago

Apply

5.0 - 10.0 years

14 - 19 Lacs

Hyderabad

Work from Office

Overview
As an Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role advocates Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll work in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems, and you will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies. Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construct database objects for baseline and investment-funded projects, as assigned. Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools. Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse. Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create source-to-target mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); and data in transit. Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing. Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders. Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
5+ years of overall technology experience, including at least 2+ years of data modeling and systems architecture. Around 2+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 2+ years of experience developing enterprise data models. Experience building solutions in the retail or supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi-cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI).

Posted 1 week ago

Apply

8.0 - 13.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Overview
As Senior Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role advocates Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll work in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems, and you will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies. Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construct database objects for baseline and investment-funded projects, as assigned. Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools. Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse. Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create source-to-target mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); and data in transit. Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing. Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders. Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture. 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 4+ years of experience developing enterprise data models. Experience building solutions in the retail or supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi-cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI).

Posted 1 week ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Bachelor's degree (or military experience) in a related field, preferably computer science, and 7 years of experience in ETL development within a data warehouse. Deep understanding of enterprise data warehousing best practices and standards. Strong experience in software engineering, comprising designing, developing, and operating robust and highly scalable cloud infrastructure services. Strong experience with Python/PySpark, DataStage ETL, and SQL development. Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably Snowflake. Knowledge of cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering, and threats and vulnerabilities, including incident response methodologies. Understanding of authentication & authorization services and identity & access management. Strong communication and interpersonal skills.
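As a hedged sketch of the PySpark ETL work this role describes, here is a minimal transformation job; the storage paths, column names, and schema are assumptions for illustration, not details from the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical landing-zone path; a real job would read from a
# DataStage-fed staging area or an S3/ADLS landing zone.
raw = (
    spark.read
    .option("header", True)
    .csv("s3://landing-zone/orders/*.csv")
)

# Typical warehouse-bound transformations: typing, filtering, derivation.
cleaned = (
    raw
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

# Land the conformed data as Parquet, partitioned for downstream loads
# (e.g., a Snowflake COPY INTO from an external stage).
cleaned.write.mode("overwrite").partitionBy("order_date") \
       .parquet("s3://conformed-zone/orders/")
```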

Posted 1 week ago

Apply

10.0 - 12.0 years

9 - 13 Lacs

Chennai

Work from Office

Job Title: Data Architect. Experience: 10-12 years. Location: Chennai.

10-12 years of experience as a Data Architect. Strong expertise in streaming data technologies like Apache Kafka, Flink, Spark Streaming, or Kinesis. Proficiency in programming languages such as Python, Java, Scala, or Go. Experience with big data tools like Hadoop and Hive, and data warehouses such as Snowflake, Redshift, Databricks, and Microsoft Fabric. Proficiency in database technologies (SQL, NoSQL, PostgreSQL, MongoDB, DynamoDB, YugabyteDB). Should be flexible to work as an individual contributor.
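As a minimal sketch of the streaming consumption work this role covers, here is a hedged Kafka consumer in Python; the topic name, brokers, and event fields are assumptions, and the kafka-python library is one of several client options.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and brokers; real deployments would also configure
# security (SASL/TLS), retries, and dead-letter handling.
consumer = KafkaConsumer(
    "orders",                                  # topic name (assumed)
    bootstrap_servers=["localhost:9092"],
    group_id="orders-analytics",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Downstream, events like this would be aggregated in Flink or Spark
    # Streaming, or landed in a warehouse table for analytics.
    print(f"partition={message.partition} offset={message.offset} "
          f"order_id={event.get('order_id')}")
```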

Posted 1 week ago

Apply

10.0 - 15.0 years

4 - 8 Lacs

Noida

Work from Office

Highly skilled and experienced Data Modeler to join the Enterprise Data Modeling team. The candidate will be responsible for creating and maintaining conceptual, logical, and physical data models, ensuring alignment with industry best practices and standards. Working closely with business and functional teams, the Data Modeler will play a pivotal role in standardizing data models at portfolio and domain levels, driving efficiencies and maximizing the value of clients' data assets. Preference will be given to candidates with prior experience within an enterprise data modeling team. The ideal domain experience would be insurance or investment banking.

Roles and Responsibilities: Develop comprehensive conceptual, logical, and physical data models for multiple domains within the organization, leveraging industry best practices and standards. Collaborate with business and functional teams to understand their data requirements and translate them into effective data models that support their strategic objectives. Serve as a subject matter expert in data modeling tools such as Erwin Data Modeler, providing guidance and support to other team members and stakeholders. Establish and maintain standardized data models across portfolios and domains, ensuring consistency, governance, and alignment with organizational objectives. Identify opportunities to optimize existing data models and enhance data utilization, particularly in critical areas such as fraud, banking, and AML. Provide consulting services to internal groups on data modeling tool usage, administration, and issue resolution, promoting seamless data flow and application connections. Develop and deliver training content and support materials for data models, ensuring that stakeholders have the necessary resources to understand and utilize them effectively. Collaborate with the enterprise data modeling group to develop and implement a robust governance framework and metrics for model standardization, with a focus on long-term automated monitoring solutions.

Qualifications: Bachelor's or master's degree in computer science, information systems, or a related field. 10 years of experience working as a Data Modeler or in a similar role, preferably within a large enterprise environment. Expertise in data modeling concepts and methodologies, with demonstrated proficiency in creating conceptual, logical, and physical data models. Hands-on experience with data modeling tools such as Erwin Data Modeler, as well as proficiency in database environments such as Snowflake and Netezza. Strong analytical and problem-solving skills, with the ability to understand complex data requirements and translate them into effective data models. Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.

Skills: problem-solving, business intelligence platforms, Erwin, data modeling, database management systems, data warehousing, ETL processes, big data technologies, agile methodologies, data governance, SQL, enterprise data modeling, data visualization tools, cloud data services, analytical skills, data modeling tools, data architecture, communication skills.

Posted 1 week ago

Apply

0.0 - 5.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer - DBT (Data Build Tool). Experience: 0-5 years. Location: Bengaluru.

Job Responsibilities
Assist in the design and implementation of a Snowflake-based analytics solution (data lake and data warehouse) on AWS: requirements definition, source data analysis and profiling, the logical and physical design of the data lake and data warehouse, and the design of data integration and publication pipelines. Develop Snowflake deployment and usage best practices. Help educate the rest of the team members on the capabilities and limitations of Snowflake. Build and maintain data pipelines adhering to suggested enterprise architecture principles and guidelines. Design, build, test, and maintain data management systems. Work in sync with internal and external team members such as data architects, data scientists, and data analysts to handle all sorts of technical issues. Act as a technical leader within the team. Work in an Agile/Lean model. Deliver quality deliverables on time. Translate complex functional requirements into technical solutions.

EXPERTISE AND QUALIFICATIONS
Essential skills, education, and experience: Should have a B.E./B.Tech./MCA or equivalent degree along with 4-7 years of experience in data engineering. Strong experience in DBT concepts like model building and configurations, incremental load strategies, macros, and DBT tests. Strong experience in SQL. Strong experience in AWS. Creation and maintenance of optimum data pipeline architecture for ingestion and processing of data. Creation of necessary infrastructure for ETL jobs from a wide range of data sources using Talend, DBT, S3, and Snowflake. Experience with data storage technologies like Amazon S3, SQL, and NoSQL. Data modeling technical awareness. Experience working with stakeholders in different time zones. Good to have: AWS data services development experience; working knowledge of big data technologies; experience collaborating with data quality and data governance teams; exposure to reporting tools like Tableau; Apache Airflow and Apache Kafka (nice to have); payments domain knowledge; in-depth understanding of CRM, accounting, etc.; regulatory reporting exposure.

Other skills: Good communication skills. Team player. Problem solver. Willing to learn new technologies, share your ideas, and assist other team members as needed. Strong analytical and problem-solving skills; ability to define problems, collect data, establish facts, and draw conclusions.
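dbt incremental models are written in SQL/Jinja; as a rough, hedged illustration of what an incremental load strategy with a unique key boils down to, here is a Python sketch issuing an equivalent Snowflake MERGE via the Python connector. The table and column names are hypothetical, and connection details are placeholders.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection; real jobs pull credentials from a secrets store.
conn = snowflake.connector.connect(
    account="my_account", user="etl_svc", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="MARTS",
)

# Upsert new or changed rows from a staging table into the target --
# roughly what dbt's incremental materialization with a unique_key
# generates under the hood (names are illustrative).
MERGE_SQL = """
    MERGE INTO fct_orders AS tgt
    USING stg_orders AS src
      ON tgt.order_id = src.order_id
    WHEN MATCHED AND src.updated_at > tgt.updated_at THEN UPDATE SET
      status = src.status,
      amount = src.amount,
      updated_at = src.updated_at
    WHEN NOT MATCHED THEN INSERT (order_id, status, amount, updated_at)
      VALUES (src.order_id, src.status, src.amount, src.updated_at)
"""

try:
    conn.cursor().execute(MERGE_SQL)
finally:
    conn.close()
```

In dbt itself, the same logic is expressed declaratively with `materialized='incremental'` and `unique_key='order_id'` in the model config.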

Posted 1 week ago

Apply

6.0 - 8.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Role Description: As a Data Engineering Lead, you will play a crucial role in overseeing the design, development, and maintenance of our organization's data architecture and infrastructure. You will be responsible for designing and developing the architecture for the data platform that ensures the efficient and effective processing of large volumes of data, enabling the business to make informed decisions based on reliable and high-quality data. The ideal candidate will have a strong background in data engineering, excellent leadership skills, and a proven track record of successfully managing complex data projects.

Responsibilities: Data architecture and design: design and implement scalable and efficient data architectures to support the organization's data processing needs; work closely with cross-functional teams to understand data requirements and ensure that data solutions align with business objectives. ETL development: oversee the development of robust ETL processes to extract, transform, and load data from various sources into the data warehouse; ensure data quality and integrity throughout the ETL process, implementing best practices for data cleansing and validation. Big data technology: stay abreast of emerging trends and technologies in big data and analytics, assess their applicability to the organization's data strategy, and implement and optimize big data technologies to process and analyze large datasets efficiently. Cloud integration: collaborate with the IT infrastructure team to integrate data engineering solutions with cloud platforms, ensuring scalability, security, and performance. Performance monitoring and optimization: implement monitoring tools and processes to track the performance of data pipelines, proactively address any issues, and optimize data processing. Documentation: maintain comprehensive documentation for data engineering processes, data models, and system architecture, and ensure that team members follow documentation standards and best practices. Collaboration and communication: collaborate with data scientists, analysts, and other stakeholders to understand their data needs and deliver solutions that meet those requirements; communicate effectively with technical and non-technical stakeholders, providing updates on project status, challenges, and opportunities.

Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 6-8 years of professional experience in data engineering. In-depth knowledge of data modeling, ETL processes, and data warehousing. In-depth knowledge of building a data warehouse using Snowflake. Should have experience in data ingestion, data lakes, data mesh, and data governance. Must have experience in Python programming. Strong understanding of big data technologies and frameworks, such as Hadoop, Spark, and Kafka. Experience with cloud platforms, such as AWS, Azure, or Google Cloud. Familiarity with database systems like SQL and NoSQL, and with data pipeline orchestration tools. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Proven ability to work collaboratively in a fast-paced, dynamic environment.

Posted 1 week ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Chennai

Work from Office

Experience in a Snowflake Lead/Senior Developer or similar role. Strong proficiency in SQL and Python. Strong proficiency in dbt data modelling. Solid understanding of data modelling principles and best practices. Experience with cloud platforms (GCP preferred).

Posted 1 week ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Work Mode: Remote. Contract Duration: 6 months to 1 year. Location: Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote (open to candidates across India).

Job Overview: We are seeking a highly skilled Technical Data Analyst for a remote contract position (6 to 12 months) to help build a single source of truth for our high-volume direct-to-consumer accounting and financial data warehouse. You will work closely with Finance & Accounting teams and play a pivotal role in dashboard creation, data transformation, and migration from Snowflake to Databricks.

Key Responsibilities:
1. Data Analysis & Reporting: Develop month-end accounting and tax dashboards using SQL in Snowflake (Snowsight). Migrate and transition reports/dashboards to Databricks. Gather, analyze, and transform business requirements from finance/accounting stakeholders into data products.
2. Data Transformation & Aggregation: Build transformation pipelines in Databricks to support balance-sheet look-forward views. Maintain data accuracy and consistency throughout the Snowflake-to-Databricks migration. Partner with Data Engineering to optimize pipeline performance.
3. ERP & Data Integration: Support integration of financial data with NetSuite ERP. Validate transformed data to ensure correct ingestion and mapping into ERP systems.
4. Ingestion & Data Ops: Work with Fivetran for ingestion and resolve any pipeline or data accuracy issues. Monitor data workflows and collaborate with engineering teams on troubleshooting.

Required Skills & Qualifications: 5+ years of experience as a Data Analyst (preferably in the finance/accounting domain). Strong in SQL, with proven experience in Snowflake and Databricks. Experience in building financial dashboards (month-end close, tax reporting, balance sheets). Understanding of financial/accounting data: GL, journal entries, balance sheet, income statements. Familiarity with Fivetran or similar data ingestion tools. Experience with data transformation in a cloud environment. Strong communication and stakeholder management skills. Nice to have: experience working with NetSuite ERP.

Apply Now: Please share your updated resume with the following details: full name, total experience, relevant experience in SQL/Snowflake/Databricks, experience in the finance or accounting domain, current location, availability (notice period), and current and expected rate.
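As a hedged sketch of the month-end dashboard queries this role describes, here is a minimal Python example running a GL aggregation in Snowflake; the `gl_journal_lines` table, its columns, and the connection parameters are hypothetical.

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="analyst", password="***",
    warehouse="REPORTING_WH", database="FINANCE", schema="CORE",
)

# A month-end close style aggregation: GL activity by account for the
# prior calendar month (table and column names are illustrative).
QUERY = """
    SELECT account_code,
           SUM(debit)  AS total_debits,
           SUM(credit) AS total_credits,
           SUM(debit) - SUM(credit) AS net_activity
    FROM gl_journal_lines
    WHERE posting_date >= DATE_TRUNC('month', DATEADD('month', -1, CURRENT_DATE()))
      AND posting_date <  DATE_TRUNC('month', CURRENT_DATE())
    GROUP BY account_code
    ORDER BY account_code
"""

try:
    for account, debits, credits, net in conn.cursor().execute(QUERY):
        print(f"{account:<12} {debits:>14.2f} {credits:>14.2f} {net:>14.2f}")
finally:
    conn.close()
```

In Snowsight, the same query would back a dashboard tile; the migration work would port it to Databricks SQL with minor dialect changes.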

Posted 1 week ago

Apply

4.0 - 6.0 years

5 - 13 Lacs

Pune

Hybrid

Job Description: This position is for a Cloud Data Engineer with a background in Python, DBT, SQL, and data warehousing for enterprise-level systems.

Major Responsibilities: Adhere to standard coding principles and standards. Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity. Design, develop, and deploy Python scripts and ETL processes in an ADF environment to process and analyze varying volumes of data. Experience with DWH, data integration, cloud, design, and data modelling. Proficient in developing programs in Python and SQL. Experience with data warehouse dimensional data modeling. Working with event-based/streaming technologies to ingest and process data. Working with structured, semi-structured, and unstructured data. Optimize ETL jobs for performance and scalability to handle big data workloads. Monitor and troubleshoot ADF jobs; identify and resolve issues or bottlenecks. Implement best practices for data management, security, and governance within the Databricks environment. Experience designing and developing enterprise data warehouse solutions. Proficient in writing SQL queries and programming, including stored procedures and reverse engineering existing processes. Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards. Check in, check out, peer review, and merge PRs into the Git repo. Knowledge of deployment of packages and code migrations to stage and prod environments via CI/CD pipelines.

Skills: 3+ years of Python coding experience. 5+ years of SQL Server-based development of large datasets. 5+ years of experience developing and deploying ETL pipelines using Databricks/PySpark. Experience in any cloud data warehouse like Synapse, ADF, Redshift, or Snowflake. Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling. Previous experience leading an enterprise-wide cloud data platform migration with strong architectural and design skills. Experience with cloud-based data architectures, messaging, and analytics. Cloud certification(s). Add-ons: any experience with Airflow, AWS Lambda, AWS Glue, and Step Functions is a plus.

Posted 1 week ago

Apply

4.0 - 9.0 years

13 - 23 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Preferred candidate profile: A minimum of 4-10 years of experience in data integration/orchestration services, service architecture, and providing data-driven solutions for client requirements. Experience with Microsoft Azure cloud, Snowflake SQL, and database query/performance tuning. Experience with Qlik Replicate and Compose (change data capture) tools is considered a plus. Strong data warehousing concepts are required; ETL tools such as Talend Cloud Data Integration are a must. Exposure to financial domain knowledge is considered a plus. Cloud managed services such as source control (GitHub) and MS Azure/DevOps are considered a plus. Prior experience with State Street and Charles River Development (CRD) is considered a plus. Experience with tools such as Visio, PowerPoint, and Excel. Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus. Strong SQL knowledge and debugging skills are a must.

Responsibilities: As a Data Integration Developer/Sr. Developer, be hands-on with ETL/ELT data pipelines, the Snowflake DWH, CI/CD deployment pipelines, and data-readiness (data quality) design, development, and implementation, and address code or data issues. Experience in designing and implementing modern data pipelines for a variety of data sets, including internal/external data sources, complex relationships, various data formats, and high volume. Experience and understanding of ETL job performance techniques, exception handling, query performance tuning/optimization, and data loads meeting runtime/schedule-time SLAs for both batch and real-time data use cases. Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions. Demonstrated strong collaborative experience across regions (APAC, EMEA, and NA) to establish design standards, high-level design solution documents, cross-training, and resource onboarding activities. Good understanding of the SDLC process, governance clearance, peer code reviews, unit test results, code deployments, code security scanning, and Confluence/Jira Kanban stories. Strong attention to detail during root cause analysis, SQL query debugging, and defect resolution, working with multiple business/IT stakeholders.

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

6-10 years of experience in ETL testing, Snowflake, and DWH concepts. Strong SQL knowledge and debugging skills are a must. Experience with Azure and Snowflake testing is a plus. Experience with Qlik Replicate and Compose (change data capture) tools is considered a plus. Strong data warehousing concepts and ETL tools like Talend Cloud Data Integration or Pentaho/Kettle. Experience with JIRA and the Xray defect management tool is good to have. Exposure to financial domain knowledge is considered a plus. Testing data readiness (data quality) and addressing code or data issues. Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions. Demonstrated strong collaborative experience across regions (APAC, EMEA, and NA) to effectively and efficiently identify the root cause of code/data issues and come up with a permanent solution. Prior experience with State Street and Charles River Development (CRD) is considered a plus. Experience with tools such as PowerPoint, Excel, and SQL. Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus. Key attributes include: team player with a professional and positive approach; creative, innovative, and able to think outside the box; strong attention to detail during root cause analysis and defect resolution; self-motivated and self-sufficient; effective communicator, both written and verbal; brings a high level of energy and enthusiasm to generate excitement and motivate the team; able to work under pressure with tight deadlines and/or multiple projects; experience in negotiation and conflict resolution.
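ETL testing of this kind often starts with source-to-target reconciliation. As a hedged sketch, here is a minimal Python check comparing row counts between a staging table and its target in Snowflake; the table names and connection details are hypothetical, and real suites would also compare checksums and column aggregates.

```python
import snowflake.connector  # pip install snowflake-connector-python

def row_count(conn, table: str) -> int:
    # Simplest reconciliation metric; extend with HASH_AGG or column
    # sums for content-level comparison.
    (count,) = conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    return count

conn = snowflake.connector.connect(
    account="my_account", user="qa_svc", password="***",
    warehouse="QA_WH", database="ANALYTICS", schema="PUBLIC",
)

try:
    # Hypothetical staging and target tables for one ETL flow.
    src = row_count(conn, "STAGING.STG_TRADES")
    tgt = row_count(conn, "MARTS.FCT_TRADES")
    status = "PASS" if src == tgt else "FAIL"
    print(f"{status}: staging={src} target={tgt}")
    # A failure here would typically be logged as a defect in JIRA/Xray.
    assert src == tgt, "Row counts diverge between staging and target"
finally:
    conn.close()
```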

Posted 1 week ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Kolkata

Hybrid

3+ years of experience in designing, developing, and managing MSSQL databases, including migration, replication, and backup. Proficiency in SQL and T-SQL programming. Experience in data modeling and database design. Experience with data integration and with Snowflake and PostgreSQL processes.

Posted 1 week ago

Apply

5.0 - 7.0 years

5 - 5 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled and innovative AI Data Engineer to join our development team. In this role, you will design, develop, and deploy AI systems that can generate content, reason autonomously, and act as intelligent agents in dynamic environments.

Key Responsibilities: Design and implement generative AI models (e.g., LLMs, diffusion models) for text, image, audio, or multimodal content generation. Develop agentic AI systems capable of autonomous decision-making, planning, and tool use in complex environments. Integrate AI agents with APIs, databases, and external tools to enable real-world task execution. Fine-tune foundation models for domain-specific applications using techniques like RLHF, prompt engineering, and retrieval-augmented generation (RAG). Collaborate with cross-functional teams, including product, design, and engineering, to bring AI-powered features to production. Conduct research and stay up to date with the latest advancements in generative and agentic AI. Ensure ethical, safe, and responsible AI development practices.

Required Qualifications: Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field. 3+ years of experience in machine learning, with a focus on generative models or autonomous agents. Proficiency in Python and ML frameworks such as PyTorch. Experience with LLMs (e.g., GPT, Claude, LLaMA, Cortex), transformers, and diffusion models. Familiarity with agent frameworks (e.g., LangChain, AutoGPT, ReAct, OpenAgents). Experience with AWS and Snowflake services. Prior healthcare experience. Strong understanding of reinforcement learning, planning algorithms, and multi-agent systems. Excellent problem-solving and communication skills.

Required Skills: Artificial Intelligence, Digital Platform, Machine Learning, Python.
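As a conceptual, dependency-free sketch of the retrieval-augmented generation (RAG) pattern the listing mentions: retrieve the most relevant snippets, then assemble them into a prompt for an LLM. The documents and the word-overlap scoring below are toy placeholders; production systems use embedding models and vector stores.

```python
# Toy in-memory "corpus"; real systems index documents in a vector store.
DOCUMENTS = [
    "Prior authorization is required for imaging procedures.",
    "Claims must be submitted within 90 days of the date of service.",
    "Members can appeal a denied claim within 180 days.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance score: count of shared lowercase words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The assembled prompt would then be sent to an LLM endpoint
    # (e.g., via AWS Bedrock or Snowflake Cortex, per the listing's stack).
    print(build_prompt("How long do I have to appeal a denied claim?"))
```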

Posted 1 week ago

Apply

10.0 - 14.0 years

25 - 30 Lacs

Pune

Work from Office

We are seeking a highly experienced Principal Solution Architect to lead the design, development, and implementation of sophisticated cloud-based data solutions for our key clients. The ideal candidate will possess deep technical expertise across multiple cloud platforms (AWS, Azure, GCP), data architecture paradigms, and modern data technologies. You will be instrumental in shaping data strategies, driving innovation through areas like GenAI and LLMs, and ensuring the successful delivery of complex data projects across various industries.

Key Responsibilities: Solution design & architecture: lead the architecture and design of robust, scalable, and secure enterprise-grade data solutions, including data lakes, data warehouses, data mesh, and real-time data pipelines on AWS, Azure, and GCP. Client engagement & pre-sales: collaborate closely with clients to understand their business challenges, translate requirements into technical solutions, and present compelling data strategies; support pre-sales activities, including proposal development and solution demonstrations. Data strategy & modernization: drive data and analytics modernization initiatives, leveraging cloud-native services, big data technologies, GenAI, and LLMs to deliver transformative business value. Industry expertise: apply data architecture best practices across various industries (e.g., BFSI, retail, supply chain, manufacturing).

Required Qualifications & Skills: Experience: 10+ years of experience in IT, with a significant focus on data architecture, solution architecture, and data engineering; proven experience in a principal-level or lead architect role. Cloud expertise: deep, hands-on experience with major cloud platforms. Azure: Microsoft Fabric, Data Lake, Power BI, Data Factory, Azure Purview; good understanding of Azure Service Foundry, agentic AI, and Copilot. GCP: BigQuery, Vertex AI, Gemini. Data science leadership: understanding and experience in integrating AI/ML capabilities, including GenAI and LLMs, into data solutions. Leadership & communication: exceptional communication, presentation, and interpersonal skills; proven ability to lead technical teams and manage client relationships. Problem-solving: strong analytical and problem-solving abilities with a strategic mindset. Education: Bachelor's or master's degree in computer science, engineering, information technology, or a related field.

Preferred Qualifications: Relevant certifications in AWS, Azure, GCP, Snowflake, or Databricks. Experience with agentic AI and hyper-intelligent automation.

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai, Bengaluru

Work from Office

Job Overview: We are seeking a highly skilled Technical Data Analyst for a remote contract position (6 to 12 months) to help build a single source of truth for our high-volume direct-to-consumer accounting and financial data warehouse. You will work closely with Finance & Accounting teams and play a pivotal role in dashboard creation, data transformation, and migration from Snowflake to Databricks.

Key Responsibilities:
1. Data Analysis & Reporting: Develop month-end accounting and tax dashboards using SQL in Snowflake (Snowsight). Migrate and transition reports/dashboards to Databricks. Gather, analyze, and transform business requirements from finance/accounting stakeholders into data products.
2. Data Transformation & Aggregation: Build transformation pipelines in Databricks to support balance-sheet look-forward views. Maintain data accuracy and consistency throughout the Snowflake-to-Databricks migration. Partner with Data Engineering to optimize pipeline performance.
3. ERP & Data Integration: Support integration of financial data with NetSuite ERP. Validate transformed data to ensure correct ingestion and mapping into ERP systems.
4. Ingestion & Data Ops: Work with Fivetran for ingestion and resolve any pipeline or data accuracy issues. Monitor data workflows and collaborate with engineering teams on troubleshooting.

Required Skills & Qualifications: 5+ years of experience as a Data Analyst (preferably in the finance/accounting domain). Strong in SQL, with proven experience in Snowflake and Databricks. Experience in building financial dashboards (month-end close, tax reporting, balance sheets). Understanding of financial/accounting data: GL, journal entries, balance sheet, income statements. Familiarity with Fivetran or similar data ingestion tools. Experience with data transformation in a cloud environment. Strong communication and stakeholder management skills. Nice to have: experience working with NetSuite ERP.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad.

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

We are seeking a highly skilled Senior Marketing Data Analyst to join our team. This role will involve analyzing and interpreting marketing data to identify trends, patterns, and actionable insights.

About the Role: The successful candidate will be responsible for: analyzing marketing data to support campaign targeting and optimize marketing efforts; collaborating with cross-functional teams to understand data needs and provide analytical solutions; evaluating marketing campaign performance and providing recommendations for improvement; ensuring data quality and accuracy through validation and cleanup; and conducting ad-hoc analysis to solve business problems and provide actionable insights.

Requirements: To be successful in this role, you will need: a minimum of 3 years of experience in data analysis; proficiency in SQL, Snowflake, and Python on a daily basis; and the ability to work with cross-functional teams and create relevant insights.

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

Hyderabad, Pune

Hybrid

Job Description: Very good knowledge of the Snowflake data warehouse, data infrastructure, data platforms, ETL implementation, data modelling, and design. Ability to gather, view, and analyze data. Apply statistical and data analysis techniques to identify patterns, trends, correlations, and anomalies in large datasets. Utilize advanced analytical tools and programming languages (e.g., Python, R, SQL) to conduct exploratory data analysis. Develop and implement data models to support predictive and prescriptive analytics. Connect to data sources, import data, and transform data for business intelligence. Strong hands-on experience in writing SQL, basic and complex. Good knowledge of the Snowflake cloud data platform. Create clear and informative visualizations (charts, graphs, dashboards) to present insights to non-technical stakeholders. Strong exposure to visualization, transformation, data analysis, and formatting skills. Develop interactive dashboards and reports using data visualization tools (e.g., Snowsight, Tableau, Power BI) to facilitate data-driven decision-making. Good to have knowledge of the finance and accounting domain. Familiarity with cloud ecosystems. Should be able to backtrack, perform deep analysis of issues, and provide RCA. Good testing and documentation skills. Able to communicate in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is the measure of success. Ability to be creative and analytical in a problem-solving environment. Effective verbal and written communication skills. Adaptable to new environments, people, technologies, and processes. Ability to manage ambiguity and solve undefined problems.

Posted 1 week ago

Apply

7.0 - 12.0 years

10 - 18 Lacs

Hyderabad

Work from Office

Role & Responsibilities
Job Description: We are seeking a Technical Lead with strong expertise in Talend, SQL, Snowflake, and AWS to lead and deliver enterprise-grade data integration and transformation solutions. The ideal candidate will have a proven track record in data engineering, leading technical teams, and implementing scalable cloud-based data platforms.

Key Responsibilities: Lead end-to-end data integration and ETL/ELT solution design using Talend. Architect and implement scalable data pipelines and workflows on AWS. Oversee development and deployment of data solutions on the Snowflake cloud data warehouse. Guide a team of data engineers and testers in the delivery of high-quality data products. Collaborate with business stakeholders, data architects, and analysts to translate requirements into technical solutions. Optimize SQL queries for performance and accuracy across large datasets. Ensure adherence to data security, governance, and quality best practices. Conduct code reviews and enforce engineering standards across the team.

Required Skills: Talend: strong hands-on experience with Talend Data Integration or the Talend Big Data platform. SQL: advanced proficiency in writing and optimizing complex SQL queries. Snowflake: experience with the Snowflake data warehouse, schema design, and data loading techniques. AWS: proficient in AWS services such as S3, Redshift, Lambda, Glue, EC2, etc. Strong understanding of data modeling, ETL frameworks, and data pipeline orchestration. Excellent communication, team leadership, and stakeholder management skills.
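A common pattern on this stack is bulk-loading files landed in S3 into Snowflake. As a hedged sketch, here is a minimal Python example issuing a COPY INTO from an external stage; the stage, table, and connection parameters are hypothetical, and in a Talend flow this step would typically run as a post-job component instead.

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="etl_svc", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="STAGING",
)

# Bulk-load CSV files from an S3-backed external stage into a staging
# table. @orders_stage and stg_orders are illustrative names.
COPY_SQL = """
    COPY INTO stg_orders
    FROM @orders_stage/orders/
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    ON_ERROR = 'ABORT_STATEMENT'
"""

try:
    cur = conn.cursor()
    cur.execute(COPY_SQL)
    # Each result row summarizes one loaded file (name, status, rows parsed).
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```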

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

We are looking for a Full Stack Developer with at least 8 years of relevant experience to be a part of our dynamic team. The ideal candidate should have a strong background in both front-end and back-end development, focusing on building and managing scalable, high-performance applications. Expertise in the React framework on the front end and Python on the back end is essential, along with proficiency in front-end technologies like HTML and CSS, strong back-end development skills, and knowledge of SQL. The work location for this position is Chennai/Bangalore, and the work timing is 2:00 PM to 11:00 PM. The interview process consists of three levels: a Glider test with a minimum cutoff of 70% and two rounds of technical interviews.

To qualify for this role, you need to exhibit: strong communication and interpersonal skills; the ability to collaborate effectively with internal and external stakeholders; innovative and analytical thinking; the capacity to manage workload under time constraints and shifting priorities; and adaptability and eagerness to learn new technologies and methodologies.

In terms of technical proficiency, the ideal candidate should have: expertise in the React framework (front end) and Python (back end); proficiency in front-end technologies such as HTML and CSS, with strong back-end development skills; proficiency in Git and CI/CD practices; experience developing and maintaining web applications using modern frameworks and technologies; the ability to help maintain code quality, organization, and automation; experience with relational database management systems; and familiarity with cloud services, primarily Azure (AWS, Azure, or Google Cloud).

Industry knowledge in the oil and gas sector, particularly in trading operations, is highly desirable. Understanding market data, trading systems, and financial instruments related to oil and gas would be an added advantage.

Preferred qualifications include: certifications in relevant technologies or methodologies; proven experience in building, operating, and supporting robust and performant databases and data pipelines; experience with Databricks and Snowflake; a solid understanding of web performance optimization, security, and best practices; and experience supporting Power BI dashboards. This is an individual contributor role requiring outstanding communication skills.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

gwalior, madhya pradesh

On-site

As a Data Engineer at Synram Software Services Pvt. Ltd., a subsidiary of FG International GmbH, you will be an integral part of our team dedicated to providing innovative IT solutions in ERP systems, e-commerce platforms, mobile applications, and digital marketing. We are committed to delivering customized solutions that drive success across various industries.

In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure. Working closely with data analysts, data scientists, and software engineers, you will facilitate data-driven decision-making throughout the organization. Your key responsibilities will include developing, testing, and maintaining data architectures; designing and implementing ETL processes; optimizing data systems; collaborating with cross-functional teams to understand data requirements; ensuring data quality, integrity, and security; automating repetitive data tasks; monitoring and troubleshooting production data pipelines; and documenting systems, processes, and best practices.

To excel in this role, you should possess a Bachelor's/Master's degree in Computer Science, Information Technology, or a related field, along with at least 2 years of experience as a Data Engineer or in a similar role. Proficiency in SQL, Python, or Scala is essential, as well as experience with data pipeline tools like Apache Airflow and familiarity with big data tools such as Hadoop and Spark. Hands-on experience with cloud platforms like AWS, GCP, or Azure is preferred, along with knowledge of data warehouse solutions like Snowflake, Redshift, or BigQuery. Preferred qualifications include knowledge of CI/CD for data applications, experience with containerization tools like Docker and Kubernetes, and exposure to data governance and compliance standards.

If you are ready to be part of a data-driven transformation journey, apply now to join our team at Synram Software Pvt. Ltd. For inquiries, contact us at career@synram.co or +91-9111381555. Benefits of this full-time, permanent role include a flexible schedule, internet reimbursement, leave encashment, a day shift with fixed hours and weekend availability, a joining bonus, and a performance bonus. The ability to commute/relocate to Gwalior, Madhya Pradesh, is preferred. The application deadline is 20/07/2025, and the expected start date is 12/07/2025.
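The listing names Apache Airflow as the pipeline tool. As a minimal sketch (assuming Airflow 2.x and a hypothetical extract/load split), here is what a scheduled DAG for such a pipeline could look like; the DAG name and task bodies are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull from a source system (API, database, files).
    print("extracting source data...")

def load():
    # Placeholder: load transformed data into the warehouse
    # (e.g., Snowflake, Redshift, or BigQuery, as the listing mentions).
    print("loading into warehouse...")

with DAG(
    dag_id="daily_orders_etl",     # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",             # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run load only after extract succeeds
```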

Posted 1 week ago

Apply