
3171 Data Engineering Jobs - Page 25

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

8.0 - 12.0 years

0 - 0 Lacs

Hyderabad

Hybrid

Job Title: Lead Data Engineer
Experience: 8+ years
Job Type: Hybrid (3 days onsite)
Location: Hyderabad
Contract: 6+ months
Mandatory Skills: Python, SQL, Snowflake (3+ years in each)

Required Skills:
- Senior developer with approximately 8 years of experience in data engineering
- Background in medium- to large-scale client environments, with work on at least 3 or 4 projects
- Strong expertise in data engineering and ETL/ELT workflows
- Solid understanding of database concepts and data modeling
- Proficiency in SQL, PL/SQL, and Python
- Snowflake experience (3+ years) with a base or advanced certification
- Excellent communication skills (written and verbal)
- Ability to work independently and proactively
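For context on the mandatory stack, here is a minimal, illustrative sketch of querying Snowflake from Python, assuming the snowflake-connector-python package; the account, credentials, and table are placeholders, not details from this listing.

```python
import snowflake.connector

# Connection parameters are placeholders; real values come from your Snowflake account.
conn = snowflake.connector.connect(
    account="your_account_identifier",
    user="your_user",
    password="your_password",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # A simple aggregation; the table and columns are hypothetical.
    cur.execute(
        "SELECT order_date, COUNT(*) AS orders "
        "FROM RAW_ORDERS GROUP BY order_date ORDER BY order_date"
    )
    for order_date, orders in cur.fetchall():
        print(order_date, orders)
finally:
    conn.close()
```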

Posted 1 week ago

Apply

1.0 - 2.0 years

3 - 4 Lacs

Bengaluru

Work from Office

Skills: Spark MLlib, Scala, Python, Databricks on AWS, Snowflake, GitLab, Jenkins, AWS DevOps CI/CD pipelines, Machine Learning, Airflow

We are seeking a highly skilled and motivated Machine Learning Engineer to join our dynamic team. The Machine Learning Engineer will be responsible for designing, developing, and deploying machine learning models to solve complex problems and enhance our products and services. The ideal candidate will have a strong background in machine learning algorithms, programming, and data analysis.

Responsibilities:
- Problem Definition: Collaborate with cross-functional teams to define and understand business problems suitable for machine learning solutions. Translate business requirements into machine learning objectives.
- Data Exploration and Preparation: Analyze and preprocess large datasets to extract relevant features for model training. Address data quality issues and ensure data readiness for machine learning tasks.
- Model Development: Develop and implement machine learning models using state-of-the-art algorithms. Experiment with different models and approaches to achieve optimal performance.
- Training and Evaluation: Train machine learning models on diverse datasets and fine-tune hyperparameters. Evaluate model performance using appropriate metrics and iterate on improvements.
- Deployment: Deploy machine learning models into production environments. Collaborate with DevOps and IT teams to ensure smooth integration.
- Monitoring and Maintenance: Implement monitoring systems to track model performance in real time. Regularly update and retrain models to adapt to evolving data patterns.
- Documentation: Document the entire machine learning development pipeline, from data preprocessing to model deployment. Create user guides and documentation for end users and stakeholders.
- Collaboration: Collaborate with data scientists, software engineers, and domain experts to achieve project goals. Participate in cross-functional team meetings and knowledge-sharing sessions.
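Since the listing names Spark MLlib on Databricks, here is a minimal sketch of an MLlib training pipeline, assuming PySpark 3.x; the toy dataset and feature names are invented for illustration.

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import StandardScaler, VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mllib-pipeline-example").getOrCreate()

# Hypothetical training data: two numeric features and a binary label.
train = spark.createDataFrame(
    [(1.0, 0.5, 0), (2.5, 1.3, 1), (0.3, 2.2, 0), (3.1, 0.9, 1)],
    ["feature_a", "feature_b", "label"],
)

# Assemble raw columns into a vector, scale it, then fit a classifier.
assembler = VectorAssembler(inputCols=["feature_a", "feature_b"], outputCol="raw_features")
scaler = StandardScaler(inputCol="raw_features", outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[assembler, scaler, lr])
model = pipeline.fit(train)
model.transform(train).select("features", "label", "prediction").show()
```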

Posted 1 week ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Data Engineer

Skills required: Big Data workflows (ETL/ELT), hands-on Python, hands-on SQL, any cloud (GCP and BigQuery preferred), Airflow (good knowledge of Airflow features, operators, scheduling, etc.)
Advantageous skills: dbt, Kafka
Experience level: 4-5 years

Note: Candidates will take a coding test (Python and SQL) during the interview process, conducted through CoderPad; the panel will set it at run time.
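Because the listing asks for familiarity with Airflow operators and scheduling, here is a minimal illustrative DAG, assuming Airflow 2.x; the DAG id, schedule, and task body are placeholders rather than anything specified by the employer.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for real ETL logic (e.g., pull from an API, load into BigQuery).
    print("running ETL step")


with DAG(
    dag_id="daily_etl_example",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",       # cron expressions work here too
    catchup=False,
) as dag:
    etl_task = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```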

Posted 1 week ago

Apply

8.0 - 13.0 years

8 - 18 Lacs

Chennai

Work from Office

About the Role
7+ years of experience in managing Data & Analytics service delivery, preferably within a Managed Services or consulting environment.

Responsibilities
- Serve as the primary owner for all managed service engagements across all clients, ensuring SLAs and KPIs are met consistently.
- Continuously improve the operating model, including ticket workflows, escalation paths, and monitoring practices.
- Coordinate triaging and resolution of incidents and service requests raised by client stakeholders.
- Collaborate with client and internal cluster teams to manage operational roadmaps, recurring issues, and enhancement backlogs.
- Lead a 40+ member team of Data Engineers and Consultants across offices, ensuring high-quality delivery and adherence to standards.
- Support the transition from project mode to Managed Services, including knowledge transfer, documentation, and platform walkthroughs.
- Ensure documentation is up to date for architecture, SOPs, and common issues.
- Contribute to service reviews, retrospectives, and continuous improvement planning.
- Report on service metrics, root cause analyses, and team utilization to internal and client stakeholders.
- Participate in resourcing and onboarding planning in collaboration with engagement managers, resourcing managers, and internal cluster leads.
- Act as a coach and mentor to junior team members, promoting skill development and a strong delivery culture.

Qualifications
- ETL or ELT: Azure Data Factory, Databricks, Synapse, dbt (any two mandatory).
- Data Warehousing: Azure SQL Server, Redshift, BigQuery, Databricks, or Snowflake (any one mandatory).
- Data Visualization: Looker, Power BI, Tableau (basic understanding to support stakeholder queries).
- Cloud: Azure (mandatory); AWS or GCP (good to have).
- SQL and Scripting: Ability to read and debug SQL and Python scripts.
- Monitoring: Azure Monitor, Log Analytics, Datadog, or equivalent tools.
- Ticketing & Workflow Tools: Freshdesk, Jira, ServiceNow, or similar.
- DevOps: Containerization technologies (e.g., Docker, Kubernetes), Git, CI/CD pipelines (exposure preferred).

Required Skills
- Strong understanding of data engineering and analytics concepts, including ELT/ETL pipelines, data warehousing, and reporting layers.
- Experience with ticketing, issue triaging, SLAs, and capacity planning for BAU operations.
- Hands-on understanding of SQL and scripting languages (Python preferred) for debugging and troubleshooting.
- Proficiency with cloud platforms like Azure and AWS; familiarity with DevOps practices is a plus.
- Familiarity with orchestration and data pipeline tools such as ADF, Synapse, dbt, Matillion, or Fabric.
- Understanding of monitoring tools, incident management practices, and alerting systems (e.g., Datadog, Azure Monitor, PagerDuty).
- Strong stakeholder communication, documentation, and presentation skills.
- Experience working with global teams and collaborating across time zones.

Posted 1 week ago

Apply

12.0 - 20.0 years

0 Lacs

Mysore, Karnataka

On-site

The Group Product Manager will lead the strategic development and enhancement of our proprietary business intelligence platform, iSOCRATES MADTechAI, as well as other innovative products. This role demands a deep understanding of technology, strong analytical skills, and a collaborative mindset to evaluate product potential, oversee the product lifecycle, and ensure alignment with both client-partner and internal needs.

Responsibilities:
- Lead the strategic vision and execution of iSOCRATES MADTechAI, focusing on feature enhancements and user experience improvements.
- Conduct market research to identify customer needs within the AdTech, MarTech, and DataTech landscapes, translating them into actionable product requirements.
- Prioritize product features based on business impact, customer feedback, and technical feasibility.
- Oversee the entire product development lifecycle, including conception, design, development, testing, and launch phases.
- Utilize Agile methodologies (Scrum, Kanban) to facilitate iterative development and continuous improvement.
- Manage roadmaps, timelines, and deliverables using tools like Jira, ensuring projects stay on track and risks are mitigated.

Technical Expertise:
- SaaS Development: Deep understanding of SaaS architecture, deployment, and lifecycle management.
- Cloud Platforms: Proficiency with cloud platforms (AWS required; Google Cloud and Azure preferred).
- AI and Machine Learning: Extensive experience with AI/ML concepts, tools, and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and their application in product development.
- Data Engineering: Strong knowledge of data engineering principles, including ETL processes, data pipelines, and data modeling to ensure data integrity and availability for analytics.
- Data Analytics: Strong knowledge of data analytics, data warehousing, and business intelligence tools (e.g., SQL, Tableau, Power BI, Sisense).
- Natural Language Processing (NLP): Familiarity with NLP techniques and their application in product features to enhance user engagement and insights.
- Microservices Architecture: Experience designing and implementing microservices architectures to enhance product scalability and maintainability.
- ReactJS Technologies: Proficiency in ReactJS and related frameworks to ensure seamless front-end development and integration with back-end services.

Collaboration and Strategy:
- Collaborate with engineering teams to define system architecture and design concepts that align with UX/UI best practices.
- Ensure the integration of various technologies, including APIs, AngularJS, Node.js, ReactJS, and MVC architecture, into product offerings.
- Strong hands-on experience in Product-Led Growth (PLG) strategies and Partner/Channel go-to-market approaches.
- Partner closely with the U.S.- and India-based Partner Success teams to support pre-sales activities and customer engagement, acting as a subject matter expert in AdTech, MarTech, and DataTech.
- Facilitate communication between product, engineering, marketing, and sales teams to ensure cohesive product strategy and execution.
- Engage with external customers to gather feedback and drive product iterations.
- Design and implement client data analysis methodologies, focusing on data-driven decision-making processes relevant to AdTech, MarTech, and DataTech.
- Develop analytics frameworks that leverage data science principles and advanced statistical methods to derive actionable insights for clients.
- Monitor product performance metrics and develop KPIs to assess impact and identify areas for improvement, leveraging A/B testing and experimentation techniques.
- Establish and refine processes for product management, ensuring repeatability and scalability.
- Lead initiatives to enhance existing workflows, focusing on efficiency and effectiveness in product delivery.
- Create and present progress reports, updates, and presentations to senior management and stakeholders.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related quantitative field. An MBA or specialized training in product management or data science is preferred.
- 12 to 20 years of experience in technology product engineering and development, with a minimum of 10 years in product management.
- Proven track record in managing complex products, especially in business intelligence or marketing technology domains.
- Strong proficiency in BI platforms (e.g., Sisense, Tableau, Power BI, Looker, Domo) and data visualization tools.
- Deep understanding of cloud platforms (AWS, Snowflake) and experience with database query languages (SQL, NoSQL).
- Expertise in API development and management, along with knowledge of front-end technologies (AngularJS, ReactJS, Bootstrap).
- In-depth knowledge of AI and NLP technologies, with experience applying them to enhance product functionality.
- Strong background in data engineering, including ETL processes, data warehousing, and data pipeline management.
- Strong understanding of digital advertising, including AdTech, MarTech, and DataTech technologies.
- Experience in B2C and B2B SaaS product development, particularly in customer journey mapping and email marketing.
- Strong analytical and problem-solving abilities, with a focus on data-driven outcomes.
- Excellent communication and presentation skills, capable of articulating complex ideas to diverse audiences.
- Collaborative and open-minded, fostering a culture of innovation and accountability.
- High energy and enthusiasm for driving product success in a fast-paced environment.
- Extensive experience with Atlassian products, including Jira and Confluence.
- Extensive experience with product management and monitoring software.
- Must be ready to relocate to Mysuru or Bengaluru.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Punjab

On-site

ABOUT XENONSTACK
XenonStack is the fastest-growing data and AI foundry for agentic systems, enabling people and organizations to gain real-time and intelligent business insights. The company specializes in building Agentic Systems for AI Agents, the Vision AI Platform, and Inference AI Infrastructure. For more information, visit: https://www.akira.ai, https://www.xenonstack.ai, and https://www.nexastack.ai.

THE OPPORTUNITY
XenonStack is seeking a creative Technical Content Creator to develop high-quality technical content, including documentation, blog posts, and tutorials. The ideal candidate should have a strong understanding of technology and a passion for writing. If you possess these qualities, we welcome you to join our team.

JOB ROLES AND RESPONSIBILITIES
- Research, create, and edit technical content that explains complex SaaS and cloud-native solutions in an easily understandable manner.
- Write blog posts, articles, case studies, and product documentation on cloud technologies, platform engineering, and data-driven analytics.
- Collaborate with technical teams, including engineers and product managers, to gather insights and ensure content accuracy.
- Develop content that helps customers understand the benefits of SaaS and data-centric solutions and supports their decision-making process.
- Optimize content for SEO to increase organic visibility and drive traffic to the company's website.
- Assist in writing technical product manuals, user guides, and knowledge base articles.
- Maintain a consistent voice and style across all content to reflect XenonStack's brand.
- Stay updated with the latest trends in SaaS, cloud computing, and data technologies to ensure content relevance.
- Review and update content regularly to ensure it remains accurate and up to date.

SKILLS REQUIREMENTS
- Proven experience writing clear, concise, and technically accurate content for SaaS and tech-focused audiences.
- Knowledge of cloud computing, SaaS platforms, and other technical domains such as data engineering, AI, and analytics.
- Excellent writing, editing, and proofreading skills with attention to detail.
- Strong understanding of SEO best practices and content optimization.
- Ability to work with subject matter experts to understand technical concepts and convey them to a non-technical audience.
- Familiarity with content management systems (CMS) and collaboration tools like Google Docs and WordPress.
- Strong research skills and the ability to keep up with new developments in technology.
- Ability to manage multiple projects simultaneously while meeting deadlines.
- A proactive attitude and the ability to work independently and as part of a team.

CAREER GROWTH & BENEFITS
XenonStack offers opportunities for continuous learning and growth through access to certification courses and skill-building sessions, along with exposure to cutting-edge technologies and business analysis strategies. The company provides performance-based recognition and rewards, constructive feedback, incentives, and special recognition for outstanding contributions. Work benefits include comprehensive medical insurance, additional allowances, and cab facilities for women employees.

XENONSTACK CULTURE - JOIN US & MAKE AN IMPACT!
XenonStack fosters a culture of cultivation with bold, courageous, and human-centric leadership principles. The company values obsession and deep work in everything it does. With a mission to disrupt and reshape the category, XenonStack welcomes individuals whose mindset and ambition align with its vision. The company focuses on simplifying the user experience with AI agents and Agentic AI, driven by the values of Obsessed with Adoption and Obsessed with Simplicity. Join XenonStack in accelerating the world's transition to AI + Human Intelligence.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Chennai

Remote

Job Title: Data Engineer (PySpark & AWS)
Location: Chennai
Employment Type: Full-time with Artech
Experience Level: 4-10 years

About the Role:
We are seeking a highly skilled Data Engineer with strong expertise in PySpark and AWS to join our growing data team. In this role, you will be responsible for building, optimizing, and maintaining data pipelines and ETL workflows on the cloud, enabling large-scale data processing and analytics. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, accurate, and reliable for advanced analytics and reporting.

Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using PySpark and Apache Spark.
- Develop and manage ETL/ELT workflows to ingest data from multiple structured and unstructured sources.
- Implement data transformation, cleansing, validation, and aggregation logic.
- Work with AWS cloud services such as S3, Glue, EMR, Lambda, Redshift, Athena, and CloudWatch.
- Monitor data pipelines for performance, reliability, and data quality.
- Collaborate with cross-functional teams to understand business data needs and translate them into technical solutions.
- Automate data engineering tasks and infrastructure using tools like Terraform or CloudFormation (optional).
- Maintain and document data architecture, job logic, and operational processes.

Required Skills:
- 4+ years of experience as a Data Engineer or in a similar role.
- Strong hands-on experience with PySpark and Apache Spark for distributed data processing.
- Proficiency in Python programming for data manipulation and automation.
- Solid understanding of AWS services for data engineering: S3, Glue, EMR, Redshift, Lambda, Athena, CloudWatch.
- Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
- Knowledge of data modeling, warehousing, and partitioning strategies.
- Experience with version control (Git) and CI/CD practices.

Nice to Have:
- Experience with workflow orchestration tools (e.g., Airflow, Step Functions).
- Familiarity with Docker/Kubernetes for containerized deployments.
- Exposure to NoSQL databases (DynamoDB, MongoDB).
- Experience with Terraform or CloudFormation for infrastructure automation.
- Knowledge of Delta Lake and data lake architecture best practices.

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
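To illustrate the kind of PySpark-on-AWS pipeline this role describes, here is a minimal sketch assuming Spark 3.x with S3 access configured; bucket paths and column names are invented placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-example").getOrCreate()

# Extract: read raw JSON events from S3 (path is a placeholder).
raw = spark.read.json("s3a://example-raw-bucket/events/")

# Transform: cleanse nulls and aggregate events per day and type.
daily_counts = (
    raw.filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
)

# Load: write partitioned Parquet back to S3 for downstream Athena/Redshift queries.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-curated-bucket/daily_event_counts/"
)
```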

Posted 1 week ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Gurugram

On-site

AWS Data Engineer

Requirements:
- 3+ years of experience working on the AWS Cloud platform, with strong experience in Python.
- Knowledge of AWS services such as S3, Glue, API Gateway, Crawler, Athena, CloudWatch, Lambda, EC2, Route 53, IAM, and Aurora/DynamoDB; Redshift is an advantage.
- Knowledge of or experience with CI/CD deployment tools (CloudFormation, Terraform, Jenkins).
- Experience with or knowledge of streaming technologies is a must, preferably Kafka.
- Knowledge of or experience with SQL.
- Experience developing REST APIs.
- Good analytical skills.
- Knowledge of or experience developing microservices.
- Familiarity with working on Linux platforms.
- Good understanding of the pros, cons, and cost impact of the AWS services being leveraged.
- Good communication skills.
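Given the emphasis on Kafka streaming, here is a minimal consumer sketch assuming the kafka-python client; the broker address, topic, and group id are illustrative placeholders.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example-events",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker
    group_id="example-consumer-group",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Each record exposes its topic, partition, offset, and the decoded payload.
for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
```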

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You have around 10 years of experience in the AI/ML, Generative AI, and automation domains. As a highly skilled and visionary Solution Architect, you will lead the design and implementation of cutting-edge solutions in Artificial Intelligence (AI), Generative AI, and Automation. The role requires a combination of technical expertise, strategic thinking, and leadership to drive innovation and deliver scalable, high-impact solutions across the organization.

Key responsibilities:
- Design end-to-end AI and automation solutions aligned with business goals and technical requirements.
- Participate in business requirements discussions to arrive at solutions and technical architecture that consider all aspects of the business problem.
- Define architecture blueprints, integration patterns, and data pipelines for AI-driven systems.
- Evaluate and select appropriate AI models, automation tools (e.g., RPA, BPM), and cloud platforms, ensuring scalability, security, and performance of deployed solutions.
- Stay current with emerging technologies and industry trends in AI, ML, NLP, computer vision, and automation.
- Provide technical leadership and mentorship to engineering teams.
- Participate in pre-sales and client engagements to define solution strategies and roadmaps.

Required qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience architecting and deploying Generative AI solutions in production environments on Azure, GCP, AWS, etc.
- Strong knowledge of AI/ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Experience with automation platforms like UiPath, Automation Anywhere, and Power Automate.
- Proficiency in cloud platforms (AWS, Azure, or GCP) and containerization (Docker, Kubernetes).
- Solid understanding of data engineering, APIs, and microservices architecture.
- Excellent communication and stakeholder management skills.

Preferred qualifications:
- Certifications in cloud architecture or AI/ML (e.g., AWS Certified Machine Learning, Azure AI Engineer).
- Experience with MLOps, model monitoring, and CI/CD pipelines.
- Familiarity with ethical AI practices and data privacy regulations.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

You are a strategic thinker with a passion for driving solutions and an innovation mindset. You have found the right team. As a Data Engineer in our STO team, you will promote solutions using data: mining, interpreting, and cleaning it, asking questions, connecting the dots, and uncovering hidden opportunities to realize the data's full potential. As part of a team of specialists, you will slice and dice data using various methods and create new visions for the future. Our STO team is focused on collaborating and partnering with the business to deliver efficiency and enhance controls via technology adoption and infrastructure support for Global Finance & Business Management India.

Job Responsibilities:
- Write efficient Python and SQL code to extract, transform, and load (ETL) data from various sources into Databricks.
- Perform data analysis and computation to derive actionable insights from the data.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Ensure data quality, integrity, and security across all data processes.
- Develop solutions optimized for performance and scalability.
- Monitor and troubleshoot data workflows to ensure reliability and efficiency.
- Document data engineering processes, methodologies, and workflows.
- Communicate analytical findings to senior leaders through data visualization and storytelling.

Required qualifications, capabilities, and skills:
- Minimum 3+ years of hands-on experience in developing, implementing, and maintaining Python automation solutions, including the use of LLMs.
- Ability to develop, implement, and maintain new and existing solutions.
- Ability to use LLMs to build AI solutions.

Preferred qualifications, capabilities, and skills:
- Hands-on experience in Python desktop solution development.
- Knowledge of machine learning and data science concepts is a plus.
- Experience with the data visualization tool Tableau is a plus.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

The project duration for this role is 6 months, with a monthly rate of 1.60 Lacs. The ideal candidate should possess 4-7 years of experience; the work location is Bangalore with a hybrid setup.

Key Responsibilities:
- Demonstrated strong proficiency in Python, LLMs, LangChain, prompt engineering, and related GenAI technologies.
- Proficiency in working with Azure Databricks.
- Strong analytical skills, problem-solving capabilities, and effective stakeholder communication.
- A solid understanding of data governance frameworks, compliance requirements, and internal controls.
- Hands-on experience in data quality rule development, profiling, and implementation.
- Familiarity with Azure Data Services such as Data Lake, Synapse, and Blob Storage.

Preferred Qualifications:
- Previous experience supporting AI/ML pipelines, particularly with GenAI or LLM-based models.
- Proficiency in Python, PySpark, and SQL, and knowledge of Delta Lake architecture.
- Hands-on experience with Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics.
- Prior experience in data engineering, with strong expertise in Databricks.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Senior Programmer Analyst position is an intermediate-level role in which you will collaborate with the Technology team to establish and implement new or updated application systems and programs. Your main objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include conducting feasibility studies, estimating time and cost, IT planning, risk technology, applications development, and implementing new or revised applications systems and programs to meet specific business needs. You will also monitor and control all phases of the development process, provide user and operational support, analyze complex problems, recommend security measures, and consult with users/clients and technology groups.

As an Applications Development Senior Programmer Analyst, you should have 6+ years of relevant experience in product engineering or a banking background, a strong interest in programming, good problem-solving skills, and a keen awareness of using GenAI tools throughout the SDLC. You should also have a desire to continuously improve developer productivity, a focus on fundamental concepts such as SOLID principles and data structures and algorithms, and a good understanding of Java 8+, the Spring framework, build tools, testing frameworks, and databases. Python knowledge is considered an added advantage.

Additionally, you should have experience in systems analysis and programming of software applications, managing and implementing successful projects, working knowledge of consulting/project management techniques, and the ability to work under pressure and manage deadlines effectively. This role does not require a high level of direct supervision; you should be able to operate independently with autonomy. You will also serve as a subject matter expert to senior stakeholders and team members. It is important to assess risks when making business decisions and to adhere to compliance with applicable laws, rules, and regulations.

The ideal candidate should have a Bachelor's degree or equivalent experience. This job description provides a general overview of the work involved; additional job-related duties may be assigned as necessary. If you are looking to join a dynamic team where you can contribute your expertise in applications development and programming, this position may be the right fit for you.

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Work from Office

About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance, and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength, and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 43,000 people worldwide. Additional information can be found at www.chubb.com.

About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work for the third consecutive year, a reflection of the culture at Chubb, where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Position Details:
Function/Department: Advanced Analytics
Location: Bangalore, India
Employment Type: Full-time

Role Overview - Full Stack Data Scientist
We are seeking a full stack data scientist for the Advanced Analytics team who will be at the forefront of developing innovative, data-driven solutions with cutting-edge machine learning and AI, end to end. The AI/ML Data Scientist is a technical role that uses AI and machine learning techniques to automate underwriting processes and improve claims outcomes and/or risk solutions. This person will develop data science solutions that require data engineering, AI/ML algorithms, and Ops engineering skills to develop and deploy them for the business. The ideal candidate has a strong education in computer science, data science, statistics, applied math, or a related field, and is eager to tackle problems with innovative thinking without compromising detailed business insights. You are adept at solving diverse problems by utilizing a variety of tools, strategies, machine learning techniques, algorithms, and programming languages.

Major Responsibilities:
- Work with business partners globally, determine analyses to be performed, manage deliverables against timelines, present results, and implement the model.
- Use a broad spectrum of machine learning, text, and image AI models to extract impactful features from structured/unstructured data.
- Develop and implement models that help with automation, insights, and smart decision-making; ensure the model meets the desired KPIs post-production.
- Develop and deploy scalable and efficient machine learning models.
- Package and publish code and solutions in reusable Python package formats (PyPI, scikit-learn pipelines, etc.).
- Keep code ready for seamless building of CI/CD pipelines and workflows for machine learning applications.
- Ensure high-quality code that meets business objectives, quality standards, and secure web development guidelines.
- Build reusable tools to streamline the modeling pipeline and share knowledge.
- Build real-time monitoring and alerting systems for machine learning systems.
- Develop and maintain automated testing and validation infrastructure.
- Troubleshoot pipelines across multiple touchpoints such as the CI server, artifact storage, and deployment cluster.
- Implement best practices for versioning, monitoring, and reusability.

Skills and Qualifications:
- Sound understanding of ML concepts: supervised/unsupervised learning, ensemble techniques, and hyperparameter tuning.
- Good knowledge of Random Forest, XGBoost, SVM, clustering, building data pipelines in Azure/Databricks, deep learning models, OpenCV, BERT and newer transformer models for NLU, and LLM applications in ML.
- Strong experience with Azure cloud computing and containerization technologies (such as Docker and Kubernetes).
- 4-6 years of experience delivering end-to-end data science models.
- Experience with Python/OOP programming and data science frameworks (Pandas, NumPy, TensorFlow, Keras, PyTorch, scikit-learn).
- Knowledge of DevOps tools such as Git, Jenkins, Sonar, and Nexus is a must.
- Experience building Python wheels and debugging the build process.
- Data pipeline building and debugging (by creating and following log traces).
- Basic knowledge of DevOps practices.
- Understanding of unit testing and test-driven development.
- SDE skills such as OOP and functional programming are an added advantage.
- Experience with Databricks and its ecosystem is an added advantage.
- Education in analytics, statistics, mathematics, or a related domain.
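Since the role calls for packaging solutions as reusable scikit-learn pipelines, here is a minimal sketch of one; the feature names and model choice are hypothetical, not taken from the listing.

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical insurance-style feature columns.
numeric_features = ["claim_amount", "policy_age_days"]
categorical_features = ["region", "product_line"]

preprocessor = ColumnTransformer(
    transformers=[
        ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]),
         numeric_features),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_features),
    ]
)

model = Pipeline(
    steps=[
        ("preprocess", preprocessor),
        ("classifier", RandomForestClassifier(n_estimators=200, random_state=42)),
    ]
)

# Once data is available: model.fit(X_train, y_train); model.predict(X_new).
# The fitted pipeline can then be versioned and published as part of a Python package.
```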

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be responsible for designing, building, and deploying scalable NLP/ML models for real-world applications. Your role will involve fine-tuning and optimizing Large Language Models (LLMs) using techniques like LoRA, PEFT, or QLoRA. You will work with transformer-based architectures such as BERT, GPT, LLaMA, and T5, and develop GenAI applications using frameworks like LangChain, Hugging Face, the OpenAI API, or RAG (Retrieval-Augmented Generation). Writing clean, efficient, and testable Python code will be a crucial part of your tasks. Collaboration with data scientists, software engineers, and stakeholders to define AI-driven solutions will also be an essential aspect of your work. Additionally, you will evaluate model performance and iterate rapidly based on user feedback and metrics.

The ideal candidate should have a minimum of 3 years of experience in Python programming with a strong understanding of ML pipelines. A solid background in NLP, including text preprocessing, embeddings, NER, and sentiment analysis, is required. Proficiency in ML libraries such as scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers, and spaCy is essential. Experience with GenAI concepts, including prompt engineering, LLM fine-tuning, and vector databases like FAISS and ChromaDB, will be beneficial. Strong problem-solving and communication skills are highly valued, along with the ability to learn new tools and to work both independently and collaboratively in a fast-paced environment. Attention to detail and accuracy is crucial for this role.

Preferred skills include theoretical knowledge or experience in Data Engineering, Data Science, AI, ML, RPA, or related domains. Certification in Business Analysis or Project Management from a recognized institution is a plus, as is experience working with agile methodologies such as Scrum or Kanban. Additional experience in deep learning and transformer architectures, prompt engineering, training LLMs, and GenAI pipeline preparation will be advantageous. Practical experience integrating LLMs such as ChatGPT, Gemini, and Claude with context-aware capabilities using RAG or fine-tuned models is a plus, as is knowledge of model evaluation and alignment and of metrics for measuring model accuracy. Data curation from sources for RAG preprocessing and development of LLM pipelines is an added advantage. Proficiency in scalable deployment and logging tooling, including Flask, Django, FastAPI, APIs, Docker containerization, and Kubeflow, is preferred. Familiarity with LangChain, LlamaIndex, vLLM, and Hugging Face Transformers, plus a basic understanding of cost-to-performance tradeoffs, will be beneficial for this role.
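As a concrete illustration of the LoRA/PEFT fine-tuning the role mentions, here is a minimal setup sketch assuming the Hugging Face transformers and peft packages; the base model, rank, and target modules are illustrative choices, not requirements from the listing.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

# Placeholder base model for a binary classification task.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,
)

lora_config = LoraConfig(
    r=8,                                # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in BERT
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```

A standard training loop (or the transformers Trainer) can then fine-tune the adapters while the base weights stay frozen.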

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems to ensure secure and efficient data storage and retrieval, enabling self-service data exploration and supporting stakeholders with insightful reporting and analysis.

Responsibilities:
- Support the development and maintenance of business intelligence and analytics systems aligned with business requirements.
- Design and optimize data warehouse architecture for efficient storage and retrieval of large datasets.
- Enable self-service data exploration for users and develop reporting applications to generate insights for stakeholders.
- Implement data models and security strategies to safeguard sensitive information.
- Optimize business intelligence processes, provide training and support on BI tools, and collaborate with vendors for successful implementation.

Qualifications:
You should possess a Bachelor's degree in Computer Science, MIS, Mathematics, Statistics, or a related field; a Master's degree or PhD is preferred. Relevant work experience in data engineering is required, based on the specified number of years for each level.

Key Skills and Abilities:
- Fluency in English
- Analytical and numerical skills
- Accuracy, attention to detail, and planning
- Presentation skills
- Data modeling, database design, and ETL
- Programming skills

FedEx is an equal opportunity employer committed to diversity, equity, and inclusion. All qualified applicants will be considered regardless of age, race, gender, disability, or other protected characteristics. FedEx values its employees and fosters a culture of service, profit, and people, investing in their growth and well-being.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

At NiCE, we challenge our limits and strive to be game changers in everything we do. If you are ambitious, innovative, and always play to win, we have the ultimate career opportunity that will ignite your passion for excellence.

We are looking for an experienced AI Architect who possesses not only strategic thinking skills but also a hands-on approach to coding. In this role, you will be involved in both proof-of-concept (POC) and production-grade AI projects. Your responsibilities will include mentoring team members, establishing ethical AI practices, and making critical decisions regarding AI deployment strategies. It is essential to have a strong background in deploying AI solutions on cloud platforms like Azure or AWS, as well as expertise in building secure and compliant data and machine learning pipelines. As a proactive leader, you will bridge the gap between innovation and execution while ensuring scalability, security, and governance in AI systems.

Your impact will be significant as you:
- Build scalable AI systems and infrastructure capable of handling large datasets, ensuring performance, reliability, and maintainability.
- Lead the development of secure and compliant data and machine learning pipelines, aligning with data governance and regulatory standards.
- Design, develop, and implement AI models and algorithms to solve real-world business problems.
- Mentor team members on AI technologies, best practices, and system architecture.
- Collaborate with stakeholders to identify AI-driven innovation opportunities and translate business requirements into technical solutions.
- Promote ethical and responsible AI practices across the organization.
- Take ownership of strategic decisions related to AI deployment and lifecycle management.
- Conduct research and implement machine learning algorithms, including Retrieval-Augmented Generation (RAG) techniques.
- Develop AI applications using modern frameworks and run experiments to enhance model performance.
- Define and implement AI project Software Development Lifecycle (SDLC) processes.

To be successful in this role, you should have:
- A Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Proven experience as an AI Architect with a track record of deploying AI solutions in production.
- Strong expertise in AI/ML technologies, cloud platforms, and secure data management.
- Proficiency in programming languages such as Python and .NET, and in AI/ML frameworks.
- Experience with AI project SDLC, CI/CD for ML, and AI testing strategies.
- Familiarity with DevOps and Data Engineering tools and practices.
- Strong analytical and problem-solving skills.
- Excellent communication skills to convey complex technical concepts.

Join NiCE, a global company where innovation and collaboration thrive. Embrace the NICE-FLEX hybrid work model for maximum flexibility and endless opportunities for growth and development. If you are passionate, innovative, and ready to raise the bar, come join us at NiCE!

Requisition ID: 7474
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE:
NICE Ltd. (NASDAQ: NICE) is a global leader in software products used by over 25,000 businesses worldwide. With a focus on delivering exceptional customer experiences and ensuring public safety, NiCE is known for its innovation in AI, cloud, and digital domains. Join our team of over 8,500 employees across 30+ countries and be part of our journey toward excellence.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

This is an opportunity to become part of the Global Talent Delivery Team, which is responsible for ensuring alignment of talent solutions, processes, and data, enabling continuous improvement through digital enablement, and delivering management information and predictive insights. The successful candidate will join the Talent Insights and Analytics Team - Data & Reporting Analysis, a key subfunction, building key relationships and delivering reporting and analytics services to Talent teams globally.

Your key responsibilities include collaborating with the Talent Insights and Analytics - Business Consulting and Analytics & Planning teams to build and enable reporting services at scale. You will support the delivery of advanced and predictive reporting techniques to deliver robust analyses and support the delivery of insights to the Talent Executive teams. Additionally, you will ensure consistent delivery of reports, compliance/legal reporting, strategic reporting, ad-hoc analysis, technical/complex requests, SuccessFactors (SF) report development, and management/cross-functional reporting. You will configure and create new reporting and analysis to meet operational and management demands, and provide better managerial insights to stakeholders through integrated and standardized data reports and dashboards.

Other responsibilities include collaborating with extended teams to ensure effective execution of technology implementation, driving quality and performance standards. You will also work closely with teams across Talent Delivery and the wider Talent functions on configuration, development, testing, and implementation of technological solutions that support business and functional delivery. You should have the ability to develop people, skills in coaching, mentoring, and learning on the job, and effectiveness in building trust, respect, and cooperation among teams.

To qualify for the role, you must bring deep knowledge of the reporting and analytics operating model, organization design, and ways of working across the talent ecosystem. You should have strong business acumen and the ability to understand the Talent systems landscape, considering functionality and integration requirements in line with the capabilities required to implement reporting and data analytics priorities. Strong communication, organizational, research, and analytical skills are essential for this role.

The ideal candidate will also have experience in delivering functional reporting solutions for business, supporting reporting capabilities and their implementation in a relevant industry or consulting environment, and collaborating with talent colleagues to understand needs and requirements. Proficiency in technologies such as SuccessFactors, Excel, SQL, Power BI, and other reporting tools is required, along with a degree-level education.

In conclusion, the Talent Insights and Analytics - Data & Reporting Analysis team is looking for an individual with the required skills and experience who can work effectively with the team, take charge of personal development, and exceed expectations to help EY build a better working world.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

Join us and contribute to the discovery of medicines that will impact lives!

Aganitha is accelerating drug discovery and development for Biopharma and Biotech R&D with in silico solutions leveraging Computational Biology & Chemistry, High-throughput Sciences, AI, ML, HPC, Cloud, Data, and DevOps. In silico solutions are transforming the biopharma and biotech industries, and our cross-domain science and technology team of experts embarks upon and industrializes this transformation. We continually expand our world-class multi-disciplinary team in Genomics, AI, and Cloud computing, accelerating drug discovery and development. What drives us is the joy of working in an innovation-rich, research-powered startup bridging multiple disciplines to bring medicines faster to human use. We are working with several innovative Biopharma companies and expanding our client base globally. Aganitha, which means countless or limitless in Sanskrit, serves as a reminder and inspiration of the limitless potential in each one of us. Come join us to bring out your best and be limitless!

Key Responsibilities:
- Engage and co-innovate with the R&D teams of our Biopharma clients, collaborating with Biologists, Medicinal Chemists, Computational Chemists, Computational Biologists, Data Scientists, Data Engineers, and Web Developers to create the next generation of innovative computational solutions for drug discovery and development.
- Lead architecture initiatives from inception to completion.
- Design and oversee implementation of solutions for BioPharma R&D.
- Manage engineering teams using Agile methodologies.
- Enhance reuse with platforms, frameworks, and libraries.
- Collaborate with business users to create architecture aligned with business needs.
- Drive scope definition, requirements analysis, functional and technical design, application build, product configuration, unit testing, and production deployment, and ensure delivered solutions meet technical and functional/non-functional requirements.
- Champion the adoption of reusable architecture assets to improve efficiency.
- Produce documentation to aid in the understanding of existing architecture solutions.
- Advise our technical teams to optimize the recommended product suite in defining, building, and deploying end-to-end solutions.
- Lead the translation of the client's business requirements into systems design.
- Drive the development of prototypes and technology demonstrators within microservice-architecture, cloud-native solutions.
- Design and document system installation and upgrade guides, specifications, and related standards, and guide the technical team and application specialists.
- Design application user guides, document common workflows, prepare training materials, and conduct or participate in knowledge-sharing sessions with technical teams.
- Design future-state reference architectures that provide a template for teams to achieve security, scalability, reliability, and maintainability.
- Design and implement optimal deployment strategies and custom interfaces for integration with customer systems and technologies.
- Help customers tackle thorny technical issues by collaborating to create smart, efficient, and effective solutions.
- Work closely with customers on the installation, configuration, and update/maintenance of on-premises or cloud-hosted installations of our scientific computing and enterprise software applications.
- Collaborate with product managers, developers, and scientists to improve the deployment, support, and configuration of our applications.

Desired Skills / Expertise:
- Big data engineering with distributed computing frameworks.
- Cloud and DevOps automation.
- Application development with modern tech stacks of Python, ReactJS, and fit-for-purpose database technologies.
- Experience directing and building solutions leveraging Machine Learning (ML) and AI (deep learning).
- Excellent communication skills and a desire to work in multidisciplinary teams.

Educational Qualifications:
- Bachelor's, Master's, or Ph.D. in Computer Science or a related technical field.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Maharashtra

On-site

As a Data Engineer with 6 to 8 years of experience, you will design, develop, and maintain data pipelines and ETL/ELT processes using Big Data technologies. Your role will involve working extensively with Azure Data Services such as Azure Data Factory, Azure Synapse, Data Lake, and Databricks.

You should have strong knowledge of Big Data ecosystems like Hadoop and Spark, along with hands-on experience in Azure Data Services including ADF, Azure Data Lake, Synapse, and Databricks. Proficiency in SQL, Python, or Scala for data manipulation and pipeline development is crucial. Experience with data modeling, data warehousing, and batch/stream processing is required to ensure the quality, integrity, and reliability of data across multiple sources and formats. You will also handle large-scale data processing using distributed computing tools and optimize the performance of data systems while ensuring security and compliance in the cloud environment.

Collaboration with data scientists, analysts, and business stakeholders is an essential part of this role to deliver scalable data solutions; an understanding of CI/CD, version control (Git), and Agile methodologies will be beneficial in this collaborative environment. If you have a passion for working with data and enjoy solving complex data engineering challenges, this role offers an exciting opportunity to contribute to the development of innovative data solutions.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be part of the Wolters Kluwer Financial Shared Services (FSS) organization, supporting the Wolters Kluwer Legal & Regulatory Division in Europe. Your role involves developing and maintaining data pipelines and delivering data analytics and insights to drive informed decision-making, enhancing operational efficiency across the organization. The primary tools used are Snowflake and Informatica.

Your responsibilities include data engineering tasks such as assembling large, complex datasets; identifying and implementing process improvements; building infrastructure for data extraction and transformation; developing analytical tools for actionable insights; collaborating with stakeholders to support data infrastructure needs; managing data integration using Snowflake and Informatica; and maintaining existing ETL jobs. Effective stakeholder management is crucial: you will foster positive and professional relationships in the workplace to ensure effective collaboration and communication. Continuous learning is encouraged, as you will need to stay updated with the latest tools and technologies available for data analytics and suggest more efficient ways of processing reports.

Key skills and experience required for this role include a Bachelor's degree in data science/analytics, finance, accounting, economics, or engineering in computer science or a related quantitative field; proficiency in Informatica ETL software and the Snowflake data platform; strong SQL skills; knowledge of data structures, transformations, and operations; analytical and communication skills; and proficiency in English. Experience with cloud platforms such as Azure or AWS is a plus. Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Infiligence is a global technology company with offices in Chennai and Hyderabad, India, and California, USA. We are committed to delivering innovative data solutions to our clients worldwide, and we offer a collaborative work environment, competitive compensation, and comprehensive employee benefits.

As a Data Engineer at Infiligence, your primary responsibility will be to design, develop, and maintain scalable data pipelines for batch and real-time data processing using Azure (preferred) or AWS cloud platforms. You will build and optimize ETL processes to ensure high-quality, secure, and efficient data flow across systems. Collaborating with cross-functional teams, you will translate business requirements into robust data models and solutions. It is essential to implement data quality, data governance, and data security standards throughout the data lifecycle. You will also develop and maintain documentation and conduct code reviews, unit testing, and peer reviews to ensure code quality and compliance. Troubleshooting, monitoring, and resolving data pipeline and infrastructure issues to minimize business impact will be part of your daily tasks, as will staying updated with new technologies and evaluating their organizational impact.

To qualify for this position, you must have a minimum of 5-7 years of experience in data engineering, with a strong background in building and managing large-scale data pipelines. Hands-on experience with Azure Data Services (Data Factory, Data Lake, Synapse, Databricks, etc.) is preferred, though experience with AWS data services is also acceptable. Proficiency in SQL and in Python or Scala for data processing and transformation is required, as is experience with data warehousing (e.g., Snowflake, SQL Server, MongoDB) and real-time databases. A strong understanding of data architecture and of data ingestion, curation, and consumption patterns is necessary. Familiarity with data quality management, metadata management, data lineage, and data security best practices will be advantageous, along with excellent communication skills and the ability to work collaboratively with global teams.

Preferred skills for this role include experience with CI/CD processes and source control for data engineering workflows, knowledge of data observability and self-testing pipelines, and exposure to business intelligence and reporting platforms.

At Infiligence, we offer comprehensive insurance coverage (health and statutory benefits as per Indian law), a competitive salary in line with industry standards, opportunities for professional growth (including support for technical certifications), and an inclusive and collaborative work culture.

If you are interested in this position, please apply with your updated CV and a cover letter via the job URL. Shortlisted candidates will undergo an HR screening, a technical assessment, and interviews. For any queries regarding the position or application process, please contact our Talent team at the Infiligence US or Chennai offices through careers@infiligence.com.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

The Senior Full Stack Software Engineer role entails responsibility for software development, maintenance, monitoring, problem resolution of both front- and back-end systems development solutions within .NET, Relativity or other eDiscovery tools. This position involves participation in projects across all SDLC lifecycles, from project inception to maintenance phase, focusing on analyzing, writing, building, and deploying software solutions of high quality. You will be accountable for creating and maintaining moderate to highly complex solutions addressing the informational and analytical needs of various groups, including data infrastructure, reporting, and applications. Your responsibilities will encompass all project lifecycle phases, such as requirements definition, solution design, application development, and system testing. You are expected to analyze end user data needs and develop user-oriented solutions that interface with existing applications. Documentation maintenance for work processes and procedures, making improvement suggestions, adhering to approved work changes, and providing backup support for projects are part of the role. Effective interaction and partnership across internal business teams, team planning, growth strategy assistance, InfoSec compliance execution, participation in system upgrades, and training on business functionality for system end users are also integral. You will work with minimal supervision, making a range of established decisions, escalating to the Manager when necessary, and providing regular updates. Adaptability, quick learning, and a big picture approach in project work are key attributes expected from you. Minimum Education Requirements: - Bachelor of Science in Computer Science or related field, or comparable business/technical experience. Minimum Experience Requirements: - At least 7-10 years of application development experience encompassing programming, data management, collection, modeling, and interpretation across complex data sets. - Proficiency in front-end technologies such as JavaScript, CSS3, and HTML5, and familiarity with third-party libraries like React Js, Angular, jQuery, and LESS. - Knowledge of server-side programming languages like .Net, Java, Ruby, or Python. - Familiarity with DBMS technology including SQLServer, Oracle, MongoDB, MySQL, and caching mechanisms like Redis, Memcached, and Varnish. - Ability to design, develop, and deploy full-stack web applications using both SQL and NoSQL databases, coach junior developers in the same, rapidly learn new tools, languages, and frameworks, and work with Enterprise Integration Patterns, SOA, Microservices, Stream processing, Event-Driven Architecture, Messaging Protocols, and Data Engineering. - Comfort with software development lifecycle, testing strategies, and working independently or as part of a team. Technical Skills: - Proficient in HTML5, CSS3, JavaScript (ES6+), modern web frontend frameworks, state management libraries, server-side languages, RESTful API design/development, database design/management, caching mechanisms, authentication, and authorization mechanisms like OAuth 2.0 and JWT, Microsoft Windows Server infrastructure, distributed systems, version control systems, CI/CD pipelines, and containerization technologies like Docker and Kubernetes. 
Consilio's True North Values:
- Excellence: Making every client an advocate
- Passion: Doing because caring
- Collaboration: Winning through teamwork and communication
- Agility: Flexing, adapting, and embracing change
- People: Valuing, respecting, and investing in teammates
- Vision: Creating clarity of purpose and a clear path forward

Posted 1 week ago

Apply

13.0 - 20.0 years

30 - 45 Lacs

Pune

Hybrid

Hi, wishes from GSN! Pleasure connecting with you!

We are a Corporate Search Services firm that identifies and brings in stellar, talented professionals for our reputed IT and non-IT clients in India, and we have been successfully meeting our clients' needs for the last 20 years. At present, GSN is hiring a DATA ENGINEERING - Solution Architect for one of our leading MNC clients. Details below:

1. WORK LOCATION: PUNE
2. JOB ROLE: DATA ENGINEERING - Solution Architect
3. EXPERIENCE: 13+ yrs
4. CTC RANGE: Rs. 35 LPA to Rs. 50 LPA
5. WORK TYPE: WFO Hybrid

****** Looking for SHORT JOINERS ******

Job Description:

Architectural Vision & Strategy: Define and articulate the technical vision, strategy, and roadmap for Big Data, data streaming, and NoSQL solutions, aligning with the overall enterprise architecture and business goals.

Required Skills:
- 13+ years of progressive experience in software development, data engineering, and solution architecture roles, with a strong focus on large-scale distributed systems.
- Expertise in Big Data technologies:
  - Apache Spark: Deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques, with experience in both batch and real-time data processing paradigms.
  - Hadoop ecosystem: Strong understanding of HDFS, YARN, Hive, and other related Hadoop components.
- Real-time data streaming:
  - Apache Kafka: Expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines.
- NoSQL databases: In-depth experience with Couchbase (or MongoDB or Cassandra), including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices.
- API design & development: Extensive experience designing and implementing robust, scalable, and secure APIs (RESTful, GraphQL) for data access and integration.
- Programming & code review: Hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks. Proven experience leading and performing code reviews, ensuring code quality, performance, and adherence to architectural guidelines.
- Cloud platforms: Extensive experience designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging their Big Data, streaming, and compute services.
- Database fundamentals: Solid understanding of relational database concepts, SQL, and data warehousing principles.
- System design & architecture patterns: Deep knowledge of architectural patterns (e.g., microservices, event-driven architecture, Lambda/Kappa architecture, data mesh) and their application in data solutions.
- DevOps & CI/CD: Familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC), and automated deployment strategies for data platforms.

****** Looking for SHORT JOINERS ******

Interested? Don't hesitate to call NAK @ 9840035825 / 9244912300 for an IMMEDIATE response.

Best,
ANANTH | GSN | Google review: https://g.co/kgs/UAsF9W
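
To illustrate the Kafka-plus-Spark-Streaming pipelines this role architects, here is a minimal Spark Structured Streaming sketch reading from a Kafka topic; the broker address, topic name, and event schema are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath:

```python
# Minimal sketch: windowed aggregation over a Kafka topic with Spark
# Structured Streaming. Broker, topic, and schema below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("clickstream_stream").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("duration_s", DoubleType()),
])

# Read: subscribe to a Kafka topic; the source exposes key/value plus a
# 'timestamp' column we reuse for event-time windowing
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "clickstream")
         .load()
         .select(
             F.col("timestamp"),
             F.from_json(F.col("value").cast("string"), event_schema).alias("e"),
         )
         .select("timestamp", "e.*")
)

# Aggregate: page counts per 1-minute window, tolerating 2 minutes of lateness
counts = (
    events.withWatermark("timestamp", "2 minutes")
          .groupBy(F.window(F.col("timestamp"), "1 minute"), F.col("page"))
          .count()
)

# Write: console sink for illustration; production jobs would target a real sink
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```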

Posted 1 week ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required Candidate Profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 1 week ago

Apply

6.0 - 10.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Role Description: As a Data Engineering Lead, you will play a crucial role in overseeing the design, development, and maintenance of our organization's data architecture and infrastructure. You will design and develop the architecture for a data platform that ensures efficient and effective processing of large volumes of data, enabling the business to make informed decisions based on reliable and high-quality data. The ideal candidate will have a strong background in data engineering, excellent leadership skills, and a proven track record of successfully managing complex data projects.

Responsibilities:

Data Architecture and Design: Design and implement scalable and efficient data architectures to support the organization's data processing needs. Work closely with cross-functional teams to understand data requirements and ensure that data solutions align with business objectives.

ETL Development: Oversee the development of robust ETL processes to extract, transform, and load data from various sources into the data warehouse. Ensure data quality and integrity throughout the ETL process, implementing best practices for data cleansing and validation.

Big Data Technology: Stay abreast of emerging trends and technologies in big data and analytics, and assess their applicability to the organization's data strategy. Implement and optimize big data technologies to process and analyze large datasets efficiently.

Cloud Integration: Collaborate with the IT infrastructure team to integrate data engineering solutions with cloud platforms, ensuring scalability, security, and performance.

Performance Monitoring and Optimization: Implement monitoring tools and processes to track the performance of data pipelines, proactively address any issues, and optimize data processing.

Documentation: Maintain comprehensive documentation for data engineering processes, data models, and system architecture. Ensure that team members follow documentation standards and best practices.

Collaboration and Communication: Collaborate with data scientists, analysts, and other stakeholders to understand their data needs and deliver solutions that meet those requirements. Communicate effectively with technical and non-technical stakeholders, providing updates on project status, challenges, and opportunities.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 6-8 years of professional experience in data engineering.
- In-depth knowledge of data modeling, ETL processes, and data warehousing, including building a data warehouse on Snowflake.
- Experience with data ingestion, data lakes, data mesh, and data governance.
- Experience in Python programming.
- Strong understanding of big data technologies and frameworks such as Hadoop, Spark, and Kafka.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with SQL and NoSQL database systems and with data pipeline orchestration tools.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.
- Proven ability to work collaboratively in a fast-paced, dynamic environment.
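
As a sketch of the pipeline orchestration this role oversees, here is a minimal daily DAG in Apache Airflow (one common orchestration tool, though the posting does not name one); the task bodies and DAG name are hypothetical placeholders:

```python
# Minimal daily orchestration DAG sketch in Apache Airflow.
# Task logic and names are hypothetical; the 'schedule' argument is the
# Airflow 2.4+ spelling (older versions use 'schedule_interval').
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Hypothetical: pull the day's records from a source system
    print("extracting source data")

def load_to_snowflake():
    # Hypothetical: a real pipeline might use the Snowflake provider's
    # hooks/operators instead of a bare PythonOperator
    print("loading curated data into Snowflake")

with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load_to_snowflake)

    # Run the load only after extraction succeeds
    extract_task >> load_task
```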

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies