0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Team
The Data Science team at Navi plays a central role in building intelligent solutions that power our products and drive business impact. We work across key domains such as Lending, Collections, KYC, UPI Growth, and Ads Recommendation, applying advanced machine learning, deep learning, and GenAI techniques to solve high-impact challenges. Our work involves a diverse range of data types, including text, image, and tabular data, and we collaborate closely with cross-functional teams such as Product, Business, and Engineering to deliver end-to-end solutions.

About the Role
As a Data Scientist 2 at Navi, you’ll be an integral part of a team that’s building scalable and efficient solutions across lending, insurance, investments, and UPI. You won’t just be solving predefined problems; you’ll help define them, working hands-on across a variety of domains. In this role, you will be expected to lead projects and create real business impact for Navi. You’ll have the opportunity to apply cutting-edge techniques to real-world challenges while collaborating closely with cross-functional teams to deliver measurable business impact. This isn’t just a role; it’s a chance to contribute to the future of fintech through innovative, high-ownership work that makes a visible difference.

What We Expect From You
Design, develop, and deploy end-to-end data science solutions that address complex business problems across lending, insurance, investments, and payments.
Collaborate with cross-functional teams, including product, engineering, and business, to identify opportunities for data-driven impact.
Work with diverse data modalities such as tabular, text, audio, image, and video data to build predictive models and intelligent systems.
Continuously explore and implement state-of-the-art techniques in machine learning, deep learning, NLP, computer vision, and Generative AI.
Drive experimentation and rapid prototyping to validate hypotheses and scale successful models to production.
Monitor, evaluate, and refine model performance over time, ensuring reliability and alignment with business goals.
Contribute to building a strong data science culture by sharing best practices, mentoring peers, and actively participating in knowledge-sharing sessions.

Must Haves
Bachelor's or Master's in Engineering or equivalent.
2+ years of Data Science/Machine Learning experience.
Strong knowledge of statistics, tree-based techniques (e.g., Random Forests, XGBoost), machine learning (e.g., MLP, SVM), inference, hypothesis testing, simulations, and optimization.
Bonus: experience with deep learning techniques.
Strong Python programming skills and experience building data pipelines in PySpark, along with feature engineering.
Proficiency in pandas, scikit-learn, Scala, and SQL, and familiarity with TensorFlow/PyTorch.
Understanding of DevOps/MLOps, including creating Docker containers and deploying to production (using platforms like Databricks or Kubernetes).

Inside Navi
We are shaping the future of financial services for a billion Indians through products that are simple, accessible, and affordable. From Personal & Home Loans to UPI, Insurance, Mutual Funds, and Gold, we’re building tech-first solutions that work at scale, with a strong customer-first approach. Founded by Sachin Bansal & Ankit Agarwal in 2018, we are one of India’s fastest-growing financial services organisations. But we’re just getting started!

Our Culture
The Navi DNA: Ambition. Perseverance. Self-awareness. Ownership. Integrity.
We’re looking for people who dream big when it comes to innovation. At Navi, you’ll be empowered with the right mechanisms to work in a dynamic team that builds and improves innovative solutions. If you’re driven to deliver real value to customers, no matter the challenge, this is the place for you. We chase excellence by uplifting each other, and that starts with every one of us.

Why You'll Thrive at Navi
At Navi, it’s about how you think, build, and grow.
You’ll thrive here if:
You’re impact-driven: you take ownership, build boldly, and care about making a real difference.
You strive for excellence: good isn’t good enough. You bring focus, precision, and a passion for quality.
You embrace change: you adapt quickly, move fast, and always put the customer first.
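The Navi must-haves above call out hypothesis testing and simulations alongside the usual ML tooling. As a minimal, stdlib-only sketch of that skill (all numbers below are invented for illustration), here is a two-sample permutation test, a simulation-based check of whether two groups plausibly share one distribution:

```python
# Simulation-based hypothesis test: shuffle the pooled samples many times
# and ask how often a random split produces a mean difference at least as
# large as the one observed. The data is made up for this sketch.
import random

def permutation_test(a, b, n_permutations=10_000, seed=42):
    """Approximate p-value for H0: a and b come from the same distribution."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(pa) / len(pa) - sum(pb) / len(pb))
        if diff >= observed:
            hits += 1
    return hits / n_permutations

control = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
treated = [12.9, 13.1, 12.8, 13.3, 12.7, 13.0]
p = permutation_test(control, treated)
print(f"approximate p-value: {p:.4f}")
```

With clearly separated groups like these, the simulated p-value comes out far below conventional significance thresholds.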
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Required Skills:
Python; processing of large quantities of text documents
Extraction of text from Office and PDF documents
Input JSON to an API, output JSON to an API
NiFi (or a similar, compatible technology)
Basic understanding of AI/ML concepts
Database/search engine/SOLR skills
SQL: build queries to analyze, create, and update databases
Understands the basics of hybrid search
Experience working with terabytes (TB) of data
Basic OpenML/Python/Azure knowledge
Scripting knowledge/experience in an Azure environment to automate
Cloud systems experience related to search and databases

Platforms:
Databricks
Snowflake
ESRI ArcGIS / SDE
New GenAI app being developed

Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
3. Make data available for end users from several hundred LAN and SharePoint areas
4. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
5. Work closely with the Ingestion Lead & Vendor on issues related to data ingestion

Technical skills demonstrated:
1. SOLR - backend database
2. NiFi - data movement
3. PySpark - data processing
4. Hive & Oozie - job monitoring
5. Querying - SQL, HQL, and SOLR querying
6. Python
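The "JSON in, JSON out" document-processing step described above can be sketched in a few lines of stdlib Python: take one ingested record as JSON, normalize the text extracted from an Office or PDF document, and emit JSON ready to hand to the next API. The field names (doc_id, raw_text, source) are illustrative assumptions, not a real schema:

```python
import json
import re

def process_document(record_json: str) -> str:
    """JSON in, JSON out: clean one document record for downstream indexing."""
    record = json.loads(record_json)
    text = record.get("raw_text", "")
    # Collapse the whitespace runs typically left over from PDF/Office extraction.
    text = re.sub(r"\s+", " ", text).strip()
    out = {
        "doc_id": record["doc_id"],          # illustrative field names
        "source": record.get("source", "unknown"),
        "text": text,
        "token_count": len(text.split()),
    }
    return json.dumps(out)

incoming = json.dumps({
    "doc_id": "doc-001",
    "source": "sharepoint",
    "raw_text": "Quarterly   report\n\n  for   the team.",
})
print(process_document(incoming))
```

In a real pipeline this transform would sit between the ingestion source (NiFi flow, SharePoint crawl) and the indexing API call.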
Posted 1 week ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
🚀 We’re Hiring: Sr. Data Engineer (AWS/Azure)
📍 Location: Ahmedabad, Gujarat (Hybrid/Onsite)
📅 Experience: 4–8 Years
🕒 Type: Full-Time

Are you passionate about designing scalable, cloud-native data solutions? Ready to work on cutting-edge tech with a global engineering team? Join Simform, a top-tier digital engineering partner for AWS, Microsoft, Google Cloud, and Databricks, and help us power the data behind next-gen digital products.

🔍 What You’ll Do
As a Senior Data Engineer, you’ll design and build high-performance data pipelines using AWS and Azure services. You'll work closely with ML engineers, data scientists, and product teams to develop robust data infrastructure that supports real-time analytics, large-scale processing, and machine learning workflows.

🛠️ Tech You’ll Work With
Cloud: AWS Glue, S3, Redshift, Kinesis, Lambda / Azure Data Factory, Synapse, Databricks, Microsoft Fabric
Big Data & Streaming: Spark, Kafka, Flink, Airflow
Databases: PostgreSQL, MongoDB, MySQL, SQL Server, Cassandra, Neptune
Data Ops: ETL/ELT, data lake/lakehouse design, real-time + batch pipelines

✅ What We’re Looking For
Strong hands-on experience with end-to-end data pipelines on AWS and/or Azure
Proficiency in ETL/ELT, data modelling, and optimizing large-scale datasets (100GB+)
Solid foundation in distributed data processing and data integration
Bonus: experience with ML pipeline integration, CI/CD for data, or data observability tools

💼 Why Join Simform?
🌱 A growth-driven, engineering-first culture
🤝 Flat hierarchy & transparent leadership
🧠 Learning & certification sponsorship
🧘 Free health insurance & flexible work options
🎮 Game zone, free snacks, subsidized lunch
🌍 Global exposure across North America & Europe

If you’re ready to engineer real impact with cloud data solutions, let’s connect!
📩 Apply now or refer someone great!
👉🏿 yash.b@simformsolutions.com
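The ETL/ELT pipelines this role centers on share one shape: extract, transform, load. A dependency-free sketch of that shape using Python generators, so records stream through in constant memory (the same structure scales up to Glue or Spark jobs; the record fields here are invented for illustration):

```python
# Toy extract -> transform -> load pipeline composed from generators.
def extract(rows):
    for row in rows:                      # e.g. rows read from S3 or a queue
        yield dict(row)

def transform(records):
    for rec in records:
        # Derive a column, the typical per-record transform step.
        rec["amount_usd"] = round(rec["amount_cents"] / 100, 2)
        yield rec

def load(records, sink):
    count = 0
    for rec in records:                   # e.g. batched writes to Redshift/Synapse
        sink.append(rec)
        count += 1
    return count

raw = [{"id": 1, "amount_cents": 1999}, {"id": 2, "amount_cents": 500}]
sink = []
n = load(transform(extract(raw)), sink)
print(n, sink)
```

Because each stage is a generator, stages compose lazily; nothing materializes the full dataset, which is the property batch frameworks preserve at scale.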
Posted 1 week ago
130.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Associate Director, Data and Analytics Strategy & Architecture – Enterprise Data Enablement

THE OPPORTUNITY
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Lead an organization driven by digital technology and data-backed approaches that supports a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be one of the leaders who have a passion for using data, analytics, and insights to drive decision-making, which will allow us to tackle some of the world's greatest health threats.

Our Technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company IT operating model, Tech centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each tech center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers.

Role Overview
As a Lead Technical Data and Analytics Architect with a primary focus on Enterprise Data Enablement and Data Governance, you will play a pivotal leadership role in shaping the future of our company's enterprise data enablement and governance initiatives. This position combines strategic technology leadership with hands-on technical expertise.
You will support our Discover product line, which encompasses the Enterprise Data Marketplace, Data Catalog, and Enterprise Data Access Control products. This role is pivotal in understanding the current architecture, adoption patterns, and product strategy while helping to design the architecture for the next generation of Discover. You will create and implement strategic frameworks, ensure their adoption within product teams, and oversee the consistent management of technologies. You will work closely with the product team to establish and govern the future architecture, ensuring it evolves beyond traditional data products to include AI models, visualizations, insights assets, and more. You will play a key role in driving innovation, modularity, and scalability within the Discover ecosystem, aligning with the organization's strategic vision.

What Will You Do In The Role

Strategic Leadership
Develop and maintain a cohesive Data Enablement architecture vision, aligning with our company's business objectives and industry trends.
Provide leadership to a team of product owners and engineers in our Discover product line, mentoring and guiding them to achieve collective goals and deliverables.
Foster a collaborative environment where innovation and best practices thrive.

Integration and Innovation
Design and implement architectural solutions that enable seamless integration between the Enterprise Data Marketplace, Data Catalog, and Enterprise Data Access Control products.
Enhance API usage and drive the transition to a microservice-based architecture for greater modularity and scalability.
Support the integration of the Collibra and Immuta platforms with compute engines like Glue, Trino/Starburst, and Databricks to optimize Discover’s capabilities.

Technical Leadership and Collaboration
Collaborate with cross-functional teams, including engineering, product management, and other stakeholders, to align on architecture strategy and implementation.
Partner with the product team to define roadmaps and ensure architectural alignment with the organization's goals.
Act as a trusted advisor, providing technical leadership and driving best practices for architectural governance.

Governance and Security
Ensure all architectural designs adhere to organizational policies, data governance requirements, and security standards.
Evolve data governance practices to accommodate diverse assets, including AI models and visualizations, alongside traditional data products.

Optimization and Future-Readiness
Identify opportunities for system optimization, modernization, and cost-efficiency.
Lead initiatives to future-proof the architecture, supporting scalability for increasing demands across data products and advanced analytics.

Framework Development and Governance
Create capability and technology maps for Data Enablement and Governance, reference architectures, innovation trend maps, and architecture blueprints and patterns.
Ensure the consistent application of frameworks across product teams.

Hands-on Contribution
Actively participate in technical problem-solving, proof-of-concept development, and implementation activities.
Provide hands-on technical leadership to support your team and deliver high-value outcomes.

Cross-functional Collaboration
Partner with enterprise and product architects to ensure alignment and synergy across the organization.
Engage with stakeholders to align architectural decisions with broader business goals.
Collaborate with the internal Strategy and Architecture team's architecture lead and architects to ensure the smooth integration of Data Enablement technologies with the other products in the Data and Analytics ecosystem.

What Should You Have
Hands-on experience with platforms like Collibra, Immuta, and Databricks, and deep knowledge of data governance and access control frameworks.
Strong understanding of architectural principles, API integration strategies, and microservice-based design.
Proficiency in designing modular, scalable architectures that align with data product and data mesh principles.
Expertise in supporting diverse asset types, including AI models, visualizations, and insights assets, within enterprise ecosystems.
Knowledge of cloud platforms (AWS preferred) and containerization technologies (Docker, Kubernetes).
Proven ability to align technical solutions with business objectives and strategic goals.
Strong communication skills, with the ability to engage and influence technical and non-technical stakeholders.
Exceptional problem-solving and analytical skills, with a focus on practical, future-ready solutions.
Self-driven and adaptable, capable of managing multiple priorities in a fast-paced environment.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity.
You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are intellectually curious, join us, and start making your impact today.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business Enterprise Architecture (BEA), Business Process Modeling, Data Modeling, Emerging Technologies, Requirements Management, Solution Architecture, Stakeholder Relationship Management, Strategic Planning, System Designs
Preferred Skills:

Job Posting End Date: 07/30/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R345606
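The access-control frameworks named in this posting's qualifications (Collibra, Immuta) are commercial platforms; as a neutral, stdlib-only illustration of the underlying idea, here is a tiny attribute-based access check, where a policy grants access only when every required user attribute matches. Policy and attribute names are invented for this sketch:

```python
# Minimal attribute-based access control (ABAC) check: a policy lists the
# attributes a user must hold to access a dataset. All names are invented.
POLICY = {
    "dataset": "clinical_trials",
    "require": {"department": "research", "training": "hipaa"},
}

def can_access(user_attrs: dict, policy: dict) -> bool:
    """Grant access only if every required attribute matches exactly."""
    return all(user_attrs.get(k) == v for k, v in policy["require"].items())

researcher = {"department": "research", "training": "hipaa"}
analyst = {"department": "finance", "training": "hipaa"}
print(can_access(researcher, POLICY), can_access(analyst, POLICY))
```

Real governance platforms layer row- and column-level filters, auditing, and policy inheritance on top of this basic match-the-attributes decision.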
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Overview
We are seeking an experienced Solution Architect to join our dynamic technology team. The ideal candidate will be responsible for designing and implementing comprehensive software solutions that align with business objectives while ensuring scalability, security, and performance. This role requires a strategic thinker with hands-on technical expertise who can bridge the gap between business requirements and technical implementation.

Key Responsibilities

Solution Design & Architecture
Design end-to-end software solutions that meet business requirements and technical specifications
Create detailed architectural blueprints, technical documentation, and system integration plans
Evaluate and recommend appropriate technologies, frameworks, and platforms for project implementations
Ensure architectural compliance with industry best practices, security standards, and organizational guidelines

Technical Leadership & Team Management
Provide technical guidance and mentorship to development teams throughout the project lifecycle
Manage and lead small development teams (3-8 members) with hands-on involvement in daily operations
Conduct thorough code reviews to ensure code quality, adherence to standards, and knowledge sharing
Lead architectural reviews and design discussions with stakeholders and development teams
Oversee base framework development and establish reusable components and libraries
Implement and manage automation tools to improve development efficiency and code quality
Collaborate with cross-functional teams, including DevOps, QA, and Product Management, to ensure seamless delivery
Stay current with emerging technologies and industry trends to drive innovation within the organization

Cloud & Infrastructure Planning
Design cloud-native solutions leveraging modern cloud platforms and services
Optimize system performance, scalability, and cost-effectiveness in cloud environments
Implement best practices for cloud security, monitoring, and disaster recovery

DevOps Integration
Collaborate with DevOps teams to establish CI/CD pipelines and deployment strategies
Ensure solutions are designed with automation, monitoring, and operational excellence in mind
Support the implementation of Infrastructure as Code and containerization strategies

Required Qualifications

Experience & Education
5+ years of experience in software solution architecture and design
10-20 years of experience in software development
Proven track record of delivering complex, enterprise-level software solutions
Team Management Experience: hands-on experience managing small development teams (3-8 members)
Experience conducting code reviews, establishing coding standards, and quality assurance processes
Background in base framework development and creating reusable component libraries
Experience implementing and managing automation tools for development workflows

Technical Expertise
Cloud Platform Expertise: hands-on experience with at least one major cloud platform (AWS, Azure, Google Cloud Platform)
Solution Architecture Certification: must hold at least one recognized Solution Architect certification (AWS Solutions Architect, Azure Solutions Architect Expert, Google Professional Cloud Architect, or equivalent)
Full Stack Development: proficiency in full-stack application development using one or more of the following technology stacks:
Java-based technologies (Spring Boot, Spring Framework, Hibernate, Maven/Gradle)
Python-based technologies (Django, Flask, FastAPI, SQLAlchemy)
Modern UI Frameworks: expertise in at least one modern frontend framework (React, Vue.js, or Angular)
DevOps Toolchain: comprehensive understanding of DevOps tools and practices, including:
CI/CD platforms (Jenkins, GitLab CI, Azure DevOps, GitHub Actions)
Containerization (Docker, Kubernetes)
Infrastructure as Code (Terraform, CloudFormation, ARM templates)
Monitoring and logging tools (Prometheus, Grafana, ELK Stack)
Version control systems (Git, GitFlow)

Technical Skills
Strong knowledge of microservices architecture, API design, and distributed systems
Experience with database technologies (both SQL and NoSQL)
Understanding of security best practices and compliance requirements
Knowledge of software design patterns and architectural principles
Experience with agile development methodologies
AI/ML/LLM Application Development: strong understanding of AI/ML/LLM-based application development processes, including:
AI agent development and orchestration
Retrieval-Augmented Generation (RAG) systems and implementation
Model Context Protocol (MCP) integration and development
Machine learning model deployment and integration strategies
Understanding of AI/ML pipelines and MLOps practices

Preferred Qualifications
Multiple cloud platform certifications
Experience with serverless computing and event-driven architectures
Understanding of enterprise integration patterns and ESB technologies
Previous experience in a technical leadership or mentoring role

Significant Added Advantages
AI/ML/LLM Implementation Experience: hands-on experience in building and deploying:
AI agent systems and multi-agent orchestration platforms
Retrieval-Augmented Generation (RAG) applications and vector databases
Model Context Protocol (MCP) implementations and integrations
LLM-powered applications and prompt engineering
AI/ML model fine-tuning and optimization
Data Engineering Expertise: experience with data engineering stacks, including:
Data pipeline development and orchestration (Apache Airflow, Prefect, Dagster)
Big data processing frameworks (Apache Spark, Kafka, Flink)
Data warehouse and lake technologies (Snowflake, Databricks, AWS Redshift)
ETL/ELT processes and data transformation tools (dbt, Apache NiFi)
Stream processing and real-time analytics platforms
Data governance and quality management tools

Key Competencies
Strong analytical and problem-solving abilities
Excellent communication and presentation skills
Ability to translate complex technical concepts to non-technical stakeholders
Leadership skills with the ability to influence and guide technical teams
Adaptability to rapidly changing technology landscapes
Strong project management and organizational skills
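The Retrieval-Augmented Generation experience called out in this posting centers on one step: retrieving the most relevant documents for a query before prompting an LLM. A toy, stdlib-only retrieval sketch using bag-of-words cosine similarity stands in for what real systems do with embedding models and vector databases (documents and query below are invented):

```python
# Toy RAG retrieval step: rank documents by cosine similarity of word counts.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "invoice processing with ocr pipelines",
    "kubernetes cluster autoscaling guide",
    "fine tuning large language models",
]
print(retrieve("how to autoscale a kubernetes cluster", docs))
```

In a production RAG system the retrieved passages would then be injected into the LLM prompt as context; the ranking step shown here is the part vector databases accelerate.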
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Gartner IT
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About The Role
Gartner is seeking an Advanced Data Engineer specializing in data modeling and reporting with Azure Analysis Services and Power BI. As a key member of the team, you will contribute to the development and support of Gartner’s Enterprise Data Warehouse and a variety of data products. This role involves integrating data from both internal and external sources using diverse ingestion APIs. You will have the opportunity to work with a broad range of data technologies, focusing on building and optimizing data pipelines, as well as supporting, maintaining, and enhancing existing business intelligence solutions.
What You Will Do
Develop, manage, and optimize enterprise data models within Azure Analysis Services, including configuration, scaling, and security management
Design and build tabular data models in Azure Analysis Services for seamless integration with Power BI
Write efficient SQL queries and DAX (Data Analysis Expressions) to support robust data models, reports, and dashboards
Tune and optimize data models and queries for maximum performance and efficient data retrieval
Design, build, and automate data pipelines and applications to support data scientists and business users with their reporting and analytics needs
Collaborate with a team of Data Engineers to support and enhance the Azure Synapse Enterprise Data Warehouse environment

What You Will Need
2–4 years of hands-on experience developing enterprise data models in Azure Analysis Services
Strong expertise in designing and developing tabular models using Power BI and SQL Server Data Tools (SSDT)
Advanced proficiency in DAX for data analysis and SQL for data manipulation and querying
Proven experience creating interactive Power BI dashboards and reports for business analytics
Deep understanding of relational database systems and advanced SQL skills
Experience with T-SQL, ETL processes, and Azure Data Factory is highly desirable
Solid understanding of cloud computing concepts and experience with Azure services such as Azure Data Factory, Azure Blob Storage, and Azure Active Directory

Nice To Have
Experience with version control systems (e.g., Git, Subversion)
Familiarity with programming languages such as Python or Java
Knowledge of various database technologies (NoSQL, document, and graph databases, etc.)
Experience with data intelligence platforms like Databricks

Who You Are
Effective time management skills and the ability to meet deadlines
Excellent communication skills when interacting with technical and business audiences
Excellent organization, multitasking, and prioritization skills
A willingness and aptitude to embrace new technologies and ideas, and to master concepts rapidly
Intellectual curiosity and a passion for technology and keeping up with new trends
A record of delivering project work on time, within budget, and with high quality

Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this or other roles.

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories.

We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations.
We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive, working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring.

Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status, and to seek to advance the principles of equal employment opportunity.

Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101546

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.
Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please use only the back button within the application, not the back arrow in your browser.
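The Gartner role above leans on writing efficient SQL for reporting models. As a small, engine-agnostic illustration (run here against SQLite; the sales table and measures are invented), here is the kind of grouped aggregate a Power BI tabular model might otherwise express as a DAX measure:

```python
# Grouped reporting query: total and average sale per region, largest first.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('APAC', 120.0), ('APAC', 80.0),
        ('EMEA', 200.0), ('EMEA', 50.0);
""")
rows = conn.execute("""
    SELECT region,
           SUM(amount)           AS total,
           ROUND(AVG(amount), 1) AS avg_sale
    FROM sales
    GROUP BY region
    ORDER BY total DESC
""").fetchall()
print(rows)
```

The same shape (group, aggregate, sort) is what a tabular model precomputes so dashboards stay responsive; in DAX the SUM would be a measure evaluated per region in the report's filter context.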
Posted 1 week ago
5.0 - 10.0 years
25 - 40 Lacs
Bengaluru
Hybrid
Dear Candidate,

GyanSys is looking for Sr. Python Developers for our overseas customers' consulting projects based in the Americas/Europe/APAC region. Please apply for the job role or share your CV directly to kiran.devaraj@gyansys.com, or call 8867163603 to discuss the fitment in detail.

Designation: Sr/Lead/Principal Consultant
Experience: 6+ yrs relevant
Location: Bangalore, ITPL
Notice Period: Immediate or 30 days max

Job Description:
5+ years of application development experience with backend technologies like Python.
Able to understand and interpret business requirements into a clearly articulated technology solution.
Experience working with RDBMS database models in the cloud.
Basic knowledge of cloud components like Databricks, ADLS, and OAuth/Secrets is an added advantage.
Understanding of different databases like ClickHouse, MS SQL, Postgres, and Snowflake is an added advantage.
Experience working with the Python Tornado and FastAPI frameworks.
In-depth understanding of RESTful APIs, token authentication, and compression.
Good problem-solving skills and team spirit.
Experience with version control and issue tracking systems: Bitbucket, Git CLI, Jira.
Strong debugging, problem-solving, and investigative skills; ability to assimilate disparate information (log files, error messages, etc.) and pursue leads to find the root cause of problems.
Ability to work in an Agile environment at scale.
Excellent verbal and written communication skills.
Knowledge of the software development lifecycle, DevOps (build, continuous integration, and deployment tools), and standard methodologies.
Experience working with source control management systems like Git and Bitbucket, and managing packages using private registries like JFrog.
Understanding of the fundamental design principles behind a scalable application.
Tech Stack: Full stack: Python, Go, REST APIs; middleware (database middleware, application server middleware, message-oriented middleware, web middleware, and transaction-processing monitors); Docker, Kubernetes; data structures and design patterns; microservices and REST APIs (FastAPI/Django/Tornado); databases and SQL (Postgres, ClickHouse, MS SQL, MongoDB, Snowflake); caching and queuing (Kafka, Redis); orchestrator tools. Good to have: MLOps expertise for on-prem/cloud applications; interest in and exposure to Generative AI; understanding and practice of LLMOps. Kindly apply only if your profile fits the above prerequisites. Also, please share this job post with your acquaintances.
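The stack above centers on RESTful services with token authentication (Tornado/FastAPI). As a hedged, framework-agnostic sketch of the kind of check such a service performs behind an endpoint: the header format mirrors the common `Authorization: Bearer <token>` convention, and the token value here is entirely hypothetical (a real service would load it from a secret store).

```python
# Illustrative sketch only: constant-time bearer-token validation of the kind
# a FastAPI/Tornado auth dependency might perform. Token value is hypothetical.
import hmac
from typing import Optional

API_TOKEN = "demo-secret-token"  # hypothetical; load from a vault/secret store in practice

def is_authorized(authorization: Optional[str]) -> bool:
    """Validate an 'Authorization: Bearer <token>' header value."""
    if not authorization or not authorization.startswith("Bearer "):
        return False
    presented = authorization[len("Bearer "):]
    # compare_digest gives a constant-time comparison, avoiding timing side channels
    return hmac.compare_digest(presented, API_TOKEN)
```

In a FastAPI app this logic would typically live inside a dependency so every route shares it; the sketch keeps it as a plain function to stay framework-neutral.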
Posted 1 week ago
4.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
🔥 Senior Data Engineer (Python) - WFH. This is a fully remote working opportunity. We need immediate joiners or someone who can join in less than 1 month. If you are interested and fulfill the criteria mentioned below, then please share the following information: 1. Email ID 2. Phone number 3. Years of relevant experience 4. Updated resume 5. CCTC/ECTC 6. Notice period. Must-have technical skillsets (4-5 years of experience): ● Python ● Databricks ● Spark ● Azure ● Exposure to data engineering and service development Good-to-have technical skillsets: ● GCP, AWS ● Kubernetes ● Pandas ************************** Please note that we have a strong vetting process and tough interview rounds. We will only consider candidates with 5+ years of solid experience in Python service development, SQL, Pandas, and Airflow, with Azure or another cloud. You will be thoroughly tested on these skills. If you lack these skills, then please don't apply, to save your time. If you are absolutely sure about these skills, then send me the above details.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Design, deploy, and manage Azure infrastructure, including virtual machines, storage accounts, virtual networks, and other resources. Assist teams by deploying applications to AKS clusters using containerization technologies such as Docker, Kubernetes, Container Registry, etc. Familiarity with the Azure CLI and the ability to use PowerShell to scan Azure resources, make modifications, and produce a report or a dump. Setting up a two- or three-tier application on Azure: VMs, web apps, load balancers, proxies, etc. Well versed in security: AD, managed identities, SPNs, firewalls. Networking: NSGs, VNETs, private endpoints, ExpressRoute, Bastion, etc. Familiarity with a scripting language like Python for automation. Leveraging Terraform (or Bicep) for automating infrastructure deployment. Cost tracking, analysis, reporting, and management at the resource-group level. Experience with Azure DevOps. Experience with Azure Monitor. Strong hands-on experience in ADF, Linked Service/IR (self-hosted/managed), Logic Apps, Service Bus, Databricks, and SQL Server. Strong understanding of Python, Spark, and SQL (nice to have). Ability to work in fast-paced environments, as we have tight SLAs for tickets. Self-driven, with an exploratory mindset, as the work requires a good amount of research (within and outside the application).
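The posting asks for scripted cost tracking and reporting at the resource-group level. A hedged, library-free sketch of just the reporting step, assuming the cost records have already been exported (the record fields here are invented for illustration; real data would come from Azure Cost Management exports):

```python
# Sketch: aggregate exported cost records by resource group and order the
# totals for a report. Field names ("resource_group", "cost") are hypothetical.
from collections import defaultdict
from typing import Dict, List

def cost_by_resource_group(records: List[dict]) -> Dict[str, float]:
    totals: Dict[str, float] = defaultdict(float)
    for rec in records:
        totals[rec["resource_group"]] += rec["cost"]
    # sort so the most expensive groups appear first in the report
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

records = [
    {"resource_group": "rg-data", "cost": 120.0},
    {"resource_group": "rg-web", "cost": 45.5},
    {"resource_group": "rg-data", "cost": 30.0},
]
report = cost_by_resource_group(records)
```

The same aggregation could of course be done in PowerShell or directly in Cost Management; the point is only the shape of the resource-group roll-up the role describes.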
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About The Role: Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support development of data infrastructure on Databricks for our clients by participating in activities ranging from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance across the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions. Location: Chennai Qualifications: B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other engineering disciplines. Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks. Highly proficient in SQL and data-model (conceptual and logical) concepts. Highly proficient with Python and Spark (3+ years). Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc. 2+ years of hands-on experience with one of the top cloud platforms: AWS/GCP/Azure. Experience with modern data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc. Exposure to Hadoop and shell scripting a plus. Responsibilities: Design, implementation, and improvement of processes and automation of data infrastructure. Tuning of data pipelines for reliability and performance. Building tools and scripts to develop, monitor, and troubleshoot ETLs. Perform scalability, latency, and availability tests on a regular basis.
Perform code reviews and QA data imported by various processes. Investigate, analyze, correct and document reported data defects. Create and maintain technical specification documentation. Eucloid offers a high growth path along with great compensation, which is among the best in the industry. Please reach out to chandershekhar.verma@eucloid.com if you want to apply.
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the company: We are on a mission to make it viable to extract value from all data in the world, so humanity can capture every insight, cure, invention, and opportunity. Traditional processing solutions based on CPUs and today's software architectures cannot handle the complexity and volume of data, which is doubling every two years, with unstructured data now accounting for 90% of all data created. The surge of GenAI and its dependence on huge volumes of unstructured data is compounding the processing challenge. We are creating a new data processing standard for the accelerated computing era to overcome these performance, cost, and scalability limitations. About the role: You will play a critical role in ensuring our customers achieve meaningful outcomes using our advanced analytics engine built for Lakehouse platforms. You will partner closely with clients to understand their goals, drive adoption of our product, and deliver long-term value through ongoing engagement, technical enablement, and strategic guidance. This is a hands-on role that requires a deep understanding of data engineering, analytics, Lakehouse architectures (e.g., Delta Lake, Apache Iceberg), and related cloud technologies. You will serve as a trusted advisor to your customers, helping them to integrate, optimize, and expand their use of our product within their data ecosystem. Key Responsibilities: ● Serve as the primary post-sales point of contact and advocate for customers, ensuring a seamless onboarding and implementation process. ● Build deep relationships with technical and business stakeholders to align product capabilities with customer goals. ● Drive product adoption and usage, delivering measurable business outcomes that demonstrate the value of our product. ● Proactively identify risks to customer success and create strategies to mitigate churn while maximizing growth opportunities.
● Partner with sales, product, and engineering teams to communicate customer feedback and drive continuous product improvement. ● Guide clients in co-existing and integrating our product alongside other analytics engines (e.g., Databricks, Snowflake, Dremio) within Lakehouse environments. ● Conduct product walkthroughs, knowledge transfer sessions, and enablement workshops for technical and non-technical teams. ● Support the development of customer reference architectures and case studies based on successful deployments. ● Collaborate with system integrators, consultants, and other partners to ensure joint success in complex enterprise environments. ● Mentor junior team members and contribute to the overall growth of the Customer Success team. ● Handle escalations and production issues. ● Funnel improvements from bugs encountered in customer environments into bug fixes, features, and supportability enhancements in the product. Qualifications ● Proven experience in customer-facing roles such as Customer Success, Solutions Engineering, Technical Account Management, or Post-Sales Consulting. ● Strong technical acumen in Lakehouse platforms, data engineering, analytics, SQL, and AI/ML. ● Hands-on expertise in public cloud platforms (AWS, Azure, GCP) and common data tools (Spark, Python, Scala, Java). ● Ability to clearly communicate complex technical concepts to both technical and business audiences. ● Experience with onboarding, driving adoption, and demonstrating ROI in enterprise software environments. ● Excellent collaboration and stakeholder management skills. ● Bachelor's degree in Computer Science, Engineering, or a related field; Master's preferred.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Develop and optimize data pipelines to enhance data processing efficiency. - Collaborate with stakeholders to gather requirements and translate them into technical specifications. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Strong understanding of data modeling and database design principles. - Experience with ETL tools and data integration techniques. - Familiarity with cloud platforms and services related to data storage and processing. - Knowledge of programming languages such as Python or Scala for data manipulation. Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Chennai office.
- A 15 years full time education is required.
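The Data Engineer roles above all revolve around ETL (extract, transform and load) pipelines. As a hedged, tool-agnostic sketch of that pattern using only the standard library (the CSV fields and target table are invented for illustration; on the job this would run on Databricks/Spark rather than sqlite3):

```python
# Tiny extract-transform-load pass, stdlib only, purely illustrative.
import csv
import io
import sqlite3

raw = "id,amount\n1, 10.5 \n2,  4.0\n"            # extract: raw source data

rows = []
for rec in csv.DictReader(io.StringIO(raw)):       # transform: parse and clean
    rows.append((int(rec["id"]), float(rec["amount"].strip())))

conn = sqlite3.connect(":memory:")                 # load: write to the target table
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

The three stages map directly onto the responsibilities listed in these postings: sourcing data, ensuring data quality during transformation, and deploying it across systems.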
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders. Roles & Responsibilities: - Need a Databricks resource with Azure cloud experience. - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Collaborate with data architects and analysts to design scalable data solutions. - Implement best practices for data governance and security throughout the data lifecycle. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with data integration tools and ETL processes. - Familiarity with cloud platforms and services related to data storage and processing.
Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Pune office. - A 15 years full time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in ensuring that data is accessible, reliable, and ready for analysis, contributing to informed decision-making within the organization. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Assist in the design and implementation of data architecture and data models. - Monitor and optimize data pipelines for performance and reliability. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Apache Spark and data lake architectures. - Strong understanding of ETL processes and data integration techniques. - Familiarity with data quality frameworks and data governance practices. - Experience with cloud platforms such as AWS or Azure.
Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Bhubaneswar office. - A 15 years full time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders. Roles & Responsibilities: - Need a Databricks resource with Azure cloud experience. - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Collaborate with data architects and analysts to design scalable data solutions. - Implement best practices for data governance and security throughout the data lifecycle. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with data integration tools and ETL processes. - Familiarity with cloud platforms and services related to data storage and processing.
Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Pune office. - A 15 years full time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code for multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, performing maintenance and enhancements, and contributing to the overall development process. You will be responsible for delivering high-quality code while adhering to best practices and project timelines, ensuring that the applications meet the needs of the clients effectively. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. - Conduct code reviews to ensure adherence to coding standards and best practices. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Strong understanding of data processing and analytics workflows. - Experience with cloud-based data solutions and architectures. - Familiarity with programming languages such as Python or Scala. - Knowledge of data integration techniques and ETL processes. Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based in Hyderabad.
- A 15 years full time education is required.
Posted 1 week ago
0 years
0 Lacs
India
On-site
Job Description: We are seeking a highly skilled Data Engineer with expertise in Azure data engineering platforms to join our team. The ideal candidate should have strong technical skills in data modeling, SQL, and Azure technologies, with a deep understanding of end-to-end data flows from ingestion to exploitation. Key Responsibilities: ● Build and optimize complex SQL queries for data pipelines and analytics. ● Design and implement efficient data models using robust data structures and algorithms. ● Manage and optimize data storage using Azure Databricks and Azure Data Lake Storage. ● Orchestrate complex pipelines using Azure Data Factory. ● Demonstrate a thorough understanding of the Azure data engineering platform, including platform architecture and workflows. ● Collaborate with cross-functional teams to ensure seamless data flow and processing. ● Utilize Python for advanced data engineering tasks and automation. Requirements: ● SQL expertise: proficiency in building and optimizing queries. ● Python and PySpark: hands-on experience with Python programming and PySpark. ● Data modeling: experience with data structures, algorithms, and end-to-end data flows.
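The first responsibility above, building and optimizing SQL queries, can be illustrated with one common optimization: adding an index so a filter becomes an index seek instead of a full-table scan. This is a hedged sketch only; sqlite3 stands in for the cloud warehouse, and the table and column names are invented.

```python
# Illustrative sketch: compare the query plan before/after adding an index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 100, "click") for i in range(1000)])

# Without an index this filter scans all 1000 rows; the index makes it a seek.
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM events WHERE user_id = 7"
).fetchone()
n = conn.execute("SELECT COUNT(*) FROM events WHERE user_id = 7").fetchone()[0]
```

Warehouses like Databricks SQL use different physical mechanisms (partitioning, data skipping, Z-ordering) rather than B-tree indexes, but the workflow of inspecting the plan and reshaping the physical layout is the same.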
Posted 1 week ago
0 years
0 Lacs
India
On-site
Description: Senior Full Stack Developer Position Overview: We are seeking a highly skilled Full Stack Developer to join our dynamic team. The ideal candidate will possess a robust understanding of both front-end and back-end development, with a strong emphasis on creating and maintaining scalable, high-performance applications. This role requires a professional who can seamlessly integrate into our team, contributing to the development of innovative solutions that drive our trading operations. To be eligible for this role, you must be able to demonstrate: • Strong communication and interpersonal skills • Ability to collaborate effectively with internal and external customers • Innovative and analytical thinking • Ability to manage workload under time pressure and changing priorities • Adaptability and willingness to learn new technologies and methodologies Required Skills and Qualifications: • Technical proficiency: • Expert experience with the React front-end framework and Python back-end development. • Proficient in front-end technologies such as HTML and CSS, with strong back-end development skills in Python or similar languages. • Proficient with Git and CI/CD. • Develop and maintain web applications using modern frameworks and technologies. • Help maintain code quality, organization, and automation. • Experience with relational database management systems. • Familiarity with cloud services (AWS, Azure, or Google Cloud; primarily Azure). • Industry knowledge: • Experience in the oil and gas industry, particularly within trading operations, is highly desirable. • Understanding of market data, trading systems, and financial instruments related to oil and gas. Preferred Qualifications: • Certifications in relevant technologies or methodologies. • Proven experience in building, operating, and supporting robust and performant databases and data pipelines. • Experience with Databricks and Snowflake. • Solid understanding of web performance optimization, security, and best practices.
Posted 1 week ago
2.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Management Level: Location: Kochi, Coimbatore, Trivandrum Must have skills: Python/Scala, PySpark/PyTorch Good to have skills: Redshift Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Responsibilities: Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help our business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation, etc. to generate an outcome that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions ensuring data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD. Professional And Technical Skills: Expert in Python, Scala, PySpark, PyTorch, JavaScript (any 2 at least). Extensive experience in data analysis (big data: Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.
Additional Information: Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS - AWS Certified Data Analytics - Specialty; Azure - Microsoft Certified: Azure Data Scientist Associate; Snowflake - SnowPro Core; Databricks - Data Engineering. About Our Company | Accenture. Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.
Posted 1 week ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Assist in the design and implementation of data architecture to support data initiatives. - Monitor and optimize data pipelines for performance and reliability. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Strong understanding of data modeling and database design principles. - Experience with ETL tools and processes. - Familiarity with cloud platforms and services related to data storage and processing. - Knowledge of programming languages such as Python or Scala for data manipulation. Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Indore office.
- A 15 years full time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders. Roles & Responsibilities: - Need a Databricks resource with Azure cloud experience. - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Collaborate with data architects and analysts to design scalable data solutions. - Implement best practices for data governance and security throughout the data lifecycle. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with data integration tools and ETL processes. - Familiarity with cloud platforms and services related to data storage and processing.
Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Pune office. - A 15 years full time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and supporting data-driven decision-making within the organization. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Collaborate with stakeholders to gather and analyze data requirements. - Design and implement robust data pipelines to ensure efficient data flow. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with ETL tools and data integration techniques. - Familiarity with cloud platforms and data storage solutions.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Noida office.
- A 15 years full time education is required.
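The ETL duties described in the Data Engineer posting above (extract, transform, load, with data-quality checks) follow a common pattern. Below is a minimal, library-free Python sketch of that pattern for illustration only; the record shape and quality rule are hypothetical, and a real Databricks pipeline would use Spark DataFrames rather than plain lists.

```python
# Illustrative ETL sketch (hypothetical data; no Databricks dependency).
# Extract: read raw records. Transform: validate and normalize. Load: write out.

def extract():
    # Stand-in for reading from a source system.
    return [
        {"id": 1, "amount": "120.50", "city": "Noida"},
        {"id": 2, "amount": None, "city": "noida"},   # fails the quality rule
        {"id": 3, "amount": "80.00", "city": "Pune"},
    ]

def transform(records):
    clean = []
    for r in records:
        if r["amount"] is None:          # basic data-quality rule: drop nulls
            continue
        clean.append({
            "id": r["id"],
            "amount": float(r["amount"]),  # cast to a numeric type
            "city": r["city"].title(),     # normalize casing
        })
    return clean

def load(records, target):
    # Stand-in for writing to a warehouse table.
    target.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 2 rows pass the quality check
```

The same extract/transform/load split maps directly onto notebook tasks or pipeline stages in an orchestrated environment.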
Posted 1 week ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively and contribute in team discussions.
- Contribute to solving work-related problems.
- Collaborate with cross-functional teams to design and implement data platform solutions.
- Develop and maintain data pipelines for efficient data processing.
- Implement data security and privacy measures to protect sensitive information.
- Optimize data storage and retrieval processes for improved performance.
- Conduct regular data platform performance monitoring and troubleshooting.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data platforms.
- Experience with data modeling and database design.
- Hands-on experience with ETL processes and tools.
- Knowledge of data governance and compliance standards.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Ahmedabad office.
- A 15 years full time education is required.
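One responsibility listed above is implementing data security and privacy measures. A common technique is masking sensitive fields before data leaves a controlled zone. The sketch below is a hypothetical illustration in plain Python; the field names and record shape are made up, and production platforms would typically rely on governance tooling (for example Unity Catalog column masks) rather than hand-rolled code.

```python
# Illustrative field-masking sketch (hypothetical fields; assumption, not a
# specific platform's API). Sensitive values are replaced with a short,
# irreversible SHA-256 digest so joins on the masked value still work.
import hashlib

SENSITIVE_FIELDS = {"pan", "phone"}

def mask(record):
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256(str(value).encode()).hexdigest()
            out[key] = digest[:12]       # truncated digest as the masked token
        else:
            out[key] = value             # non-sensitive fields pass through
    return out

row = {"id": 7, "pan": "ABCDE1234F", "city": "Ahmedabad"}
masked = mask(row)
print(masked["city"])  # non-sensitive field is unchanged
```

Hashing (rather than encryption) is shown here because it is one-way; choose encryption instead when the original value must be recoverable by authorized users.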
Posted 1 week ago
0.0 - 2.0 years
5 - 12 Lacs
Pune, Maharashtra
On-site
Company name: PibyThree Consulting Services Pvt Ltd.
Location: Baner, Pune
Start date: ASAP

Job Description: We are seeking an experienced Data Engineer to join our team. The ideal candidate will have hands-on experience with Azure Data Factory (ADF), Snowflake, and data warehousing concepts. The Data Engineer will be responsible for designing, developing, and maintaining large-scale data pipelines and architectures.

Key Responsibilities:
- Design, develop, and deploy data pipelines using Azure Data Factory (ADF)
- Work with Snowflake to design and implement data warehousing solutions
- Collaborate with cross-functional teams to identify and prioritize data requirements
- Develop and maintain data architectures, data models, and data governance policies
- Ensure data quality, security, and compliance with regulatory requirements
- Optimize data pipelines for performance, scalability, and reliability
- Troubleshoot data pipeline issues and implement fixes
- Stay up to date with industry trends and emerging technologies in data engineering

Requirements:
- 4+ years of experience in data engineering, with a focus on cloud-based data platforms (Azure preferred)
- 2+ years of hands-on experience with Azure Data Factory (ADF)
- 1+ year of experience working with Snowflake
- Strong understanding of data warehousing concepts, data modeling, and data governance
- Experience with data pipeline orchestration tools such as Apache Airflow or Azure Databricks
- Proficiency in programming languages such as Python, Java, or C#
- Experience with cloud-based data storage solutions such as Azure Blob Storage or Amazon S3
- Strong problem-solving skills and attention to detail

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,200,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Pune, Maharashtra: Reliably commute or plan to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience:
- Total work: 4 years (Preferred)
- PySpark: 2 years (Required)
- Azure Data Factory: 2 years (Required)
- Databricks: 2 years (Required)
Work Location: In person
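The posting above mentions pipeline orchestration tools such as Apache Airflow and Azure Data Factory. At their core, orchestrators resolve step dependencies into a valid execution order. The step names below are hypothetical; this is a stdlib-only sketch of the idea, not ADF or Airflow code.

```python
# Illustrative dependency ordering, the kind of resolution an orchestrator
# such as ADF or Airflow performs. Step names are made up for this sketch.
from graphlib import TopologicalSorter

# Map each step to the set of steps it depends on.
steps = {
    "load_snowflake": {"transform"},
    "transform": {"extract_blob"},
    "extract_blob": set(),
}

order = list(TopologicalSorter(steps).static_order())
print(order)  # ['extract_blob', 'transform', 'load_snowflake']
```

Real orchestrators add retries, scheduling, and parallel execution of independent steps on top of exactly this ordering logic.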
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Data Engineer
Must-have skillsets:
- Azure Databricks, Azure Data Factory
- Programming: Python, PySpark, T-SQL, PL/SQL
- Agile Software Development Life Cycle and Scrum methodology
- Healthcare knowledge
Posted 1 week ago