8.0 - 12.0 years
16 - 18 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Scientist
Location: Bangalore
Experience: 8 - 12 Years

Job Summary
We are seeking a highly skilled and experienced Senior Data Scientist to join our team in Bangalore. The ideal candidate will have a deep understanding of Machine Learning (ML) and Artificial Intelligence (AI), with a strong focus on Azure Fabric within the banking and finance domain. In this role, you will develop and implement advanced data-driven solutions that enhance decision-making, optimise processes, and contribute to the success of our client's financial objectives.

Mandatory Skills
- Proven experience in traditional Machine Learning (ML) and Artificial Intelligence (AI).
- Strong experience in Azure Fabric and its integration with various banking systems.
- Expertise in Data Science methodologies, predictive modelling, and statistical analysis.
- Solid understanding of the finance domain with a focus on banking processes and challenges.
- Hands-on experience with big data technologies and cloud platforms (Azure, AWS).
- Proficiency in Python data science libraries (e.g., Pandas, NumPy, Scikit-learn).
- Experience in data processing, ETL pipelines, and data engineering.
- Familiarity with SQL and NoSQL databases.

Key Responsibilities
- Design and implement Machine Learning (ML) and Artificial Intelligence (AI) models to solve complex business problems in the finance sector.
- Work closely with business stakeholders to understand requirements and translate them into data-driven solutions.
- Develop and deploy ML models on Azure Fabric, ensuring their scalability and efficiency.
- Analyze large datasets to identify trends, patterns, and insights to support decision-making.
- Collaborate with cross-functional teams to integrate AI/ML solutions into business processes and banking systems.
- Maintain and optimise deployed models and ensure their continuous performance.
- Keep up to date with industry trends, technologies, and best practices in AI and ML, specifically within the finance industry.

Qualifications
- Education: Bachelor's/Master's degree in Computer Science, Data Science, Engineering, or a related field.
- Certifications: Relevant certifications in Data Science, Azure AI, or Machine Learning are a plus.

Technical Skills
- Expertise in Machine Learning (ML) algorithms (supervised and unsupervised).
- Strong experience with Azure Fabric and related Azure cloud services.
- Proficient in Python, R, and data science libraries (Pandas, Scikit-learn, TensorFlow).
- Experience with AI and Deep Learning models, including neural networks.
- Working knowledge of big data technologies such as Spark, Hadoop, and Databricks.
- Experience with version control systems (Git, GitHub, etc.).

Soft Skills
- Excellent problem-solving and analytical skills.
- Strong communication skills, with the ability to present complex data insights clearly to non-technical stakeholders.
- Ability to work effectively in a collaborative, cross-functional environment.
- Strong attention to detail and ability to manage multiple tasks simultaneously.
- A passion for continuous learning and staying updated on new technologies.

Preferred Qualifications
- Experience in the banking or financial services industry.
- Familiarity with DevOps practices for ML/AI model deployment.
- Knowledge of cloud-native architecture and containerization (Docker, Kubernetes).
- Familiarity with Deep Learning and Natural Language Processing (NLP) techniques.

Experience
- 8-12 years of experience in Data Science, with hands-on experience in ML, AI, and working within the finance or banking industry.
- Proven track record of designing and deploying machine learning models and working with Azure Fabric.
- Experience with client-facing roles and delivering solutions that impact business decision-making.

Benefits
- Competitive salary and annual performance-based bonuses.
- Comprehensive health and optional parental insurance.
- Retirement savings plans and tax savings plans.
- Work-Life Balance: Flexible work hours.

Key Performance Indicators
- Timely and effective delivery of ML/AI models that solve complex business problems.
- Continuous improvement and optimisation of deployed models.
- High-quality insights and data-driven solutions delivered for business stakeholders.
- Client satisfaction with AI/ML solutions implemented within the banking domain.
- Number of successful ML/AI models deployed and their performance post-deployment.
- Model accuracy and predictive capability (based on business goals).
- Client feedback on AI-driven solutions.
- Completion time for delivering actionable data-driven insights.
- Team collaboration and mentoring effectiveness with junior data scientists.
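The predictive modelling this role describes (e.g., scoring a binary banking outcome with Python) can be illustrated with a minimal sketch. The feature names, weights, and threshold below are hypothetical, invented purely for illustration; a real model would be fitted with a library such as Scikit-learn on historical data.

```python
import math

def logistic_score(features, weights, bias):
    """Minimal predictive-modelling sketch: a logistic (sigmoid) score of the
    kind used for binary outcomes such as loan default. All feature names and
    weights here are illustrative, not values from any real model."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # probability in (0, 1)

# Hypothetical applicant features, assumed pre-scaled to comparable ranges.
weights = {"debt_to_income": 2.0, "late_payments": 1.5, "years_employed": -0.8}
applicant = {"debt_to_income": 0.6, "late_payments": 1.0, "years_employed": 3.0}
p_default = logistic_score(applicant, weights, bias=-1.0)
```

In production such a score would be served behind an Azure endpoint and monitored for drift, which is what the KPIs above (model accuracy, post-deployment performance) track.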
Posted 5 days ago
2.0 - 4.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Cloud Developer

This role has been designed as Onsite, with an expectation that you will primarily work from an HPE office.

Who We Are:
Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description:
In HPE Hybrid Cloud, we lead the innovation agenda and technology roadmap for all of HPE. This includes managing the design, development, and product portfolio of our next-generation cloud platform, GreenLake. Working with customers, we help them reimagine their information technology needs to deliver a simple, consumable solution that helps them drive their business results. Join us to redefine what's next for you.

Job Family Definition:
The Cloud Developer builds from the ground up to meet the needs of mission-critical applications, and is always looking for innovative approaches to deliver end-to-end technical solutions to solve customer problems. Brings technical thinking to break down complex data and to engineer new ideas and methods for solving, prototyping, designing, and implementing cloud-based solutions. Collaborates with project managers and development partners to ensure effective and efficient delivery, deployment, operation, monitoring, and support of Cloud engagements.
The Cloud Developer provides business value expertise to drive the development of innovative service offerings that enrich HPE's Cloud Services portfolio across multiple systems, platforms, and applications.

Management Level Definition:
Contributions include applying an intermediate level of subject matter expertise to solve common technical problems. Acts as an informed team member providing analysis of information and recommendations for appropriate action. Works independently within an established framework and with moderate supervision.

What you will do:
- Designs simple to moderate cloud application features as per specifications.
- Develops and maintains cloud application modules adhering to security policies.
- Designs test plans; develops, executes, and automates test cases for assigned portions of the developed code.
- Deploys code and troubleshoots issues in application modules and the deployment environment.
- Shares and reviews innovative technical ideas with peers, high-level technical contributors, technical writers, and managers.
- Analyses science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns.

What you will need:
Bachelor's degree in computer science, engineering, information systems, or a closely related quantitative discipline; Master's desirable. Typically 2-4 years of experience.

Knowledge and Skills:
- Strong programming skills in Python or Golang.
- Expertise in development of microservices and deploying them in Kubernetes environments.
- Understanding and work experience in GitOps, DevOps, CI/CD tooling, concepts of package management software, software deployment, and lifecycle management.
- Experience in architecting software deployments, scripting, deployment tools such as Chef, Puppet, and Ansible, and orchestration tools such as Terraform.
- Good to have: Enterprise Data Center Infrastructure knowledge (servers, storage, networking).
- Experience with design methodologies, cloud-native applications, developer tools, managed services, and next-generation databases.
- Good written and verbal communication skills.
- Ability to quickly learn new skills and technologies and work well with other team members.
- Understanding of DevOps practices such as continuous integration/deployment and orchestration with Kubernetes.

Additional Skills:
Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Release Management, Security-First Mindset, User Experience (UX)

What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE.
#india #hybridcloud
Job: Engineering
Job Level: TCP_02

HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Hewlett Packard Enterprise is EEO Protected Veteran / Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 5 days ago
1.0 - 5.0 years
15 - 17 Lacs
Mumbai
Work from Office
Your mission
As a Software Engineer II - AI, you will be at the forefront of building agent-based systems and intelligent pipelines that combine structured geospatial data with unstructured content. You'll collaborate with AI researchers, data scientists, and product engineers to deliver scalable, high-performing solutions. If you thrive on pushing the boundaries of what AI can do in the real world, especially in dynamic environments like map making, this is the role for you.

Your Tasks
- Design and develop agentic systems using Python and frameworks like LangChain or Haystack
- Build and optimize scalable Retrieval-Augmented Generation (RAG) pipelines using LLMs and vector stores
- Integrate AI reasoning engines with data sources (SQL, NoSQL, REST APIs, file systems)
- Enhance system observability and monitoring for intelligent agents and workflows
- Implement testable, modular, and well-documented code with a focus on production-readiness
- Collaborate with ML and backend engineers to tune performance and cost-efficiency
- Stay up-to-date with the latest developments in LLMs, multi-agent systems, and semantic retrieval

What you should bring along
- Bachelor's or Master's in Computer Science, Artificial Intelligence, or equivalent
- 3+ years of experience in software development, with at least 2+ years in AI/ML systems
- Strong programming skills in Python, including async and multiprocessing capabilities
- Deep understanding of LLM integration patterns (OpenAI, Hugging Face, etc.)
- Experience building scalable RAG architectures with vector databases like FAISS, Weaviate, or Pinecone
- Familiarity with prompt engineering, semantic search, and knowledge graphs
- Proficiency in designing backend services with RESTful APIs and microservices
- Working knowledge of containerization (Docker), CI/CD pipelines, and cloud platforms (preferably AWS)
- Excellent communication and documentation skills

Who are you
- Develop, extend and maintain AI-powered software products in an innovative and iteratively growing environment
- Implement tools to enhance both automated and semi-automated map data processing, combining backend/service-based software stacks and AI-based agentic workflows
- Build dashboards or monitoring systems to visualize agent reasoning and RAG system metrics
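The RAG pipelines this posting mentions all share one core step: rank stored documents by embedding similarity to the query, then feed the top hits to the LLM. A minimal sketch of that retrieval step, with toy 3-dimensional vectors standing in for real embeddings and a plain list standing in for a vector database like FAISS or Weaviate:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the k documents whose embeddings are closest to the query.
    In a real pipeline the vectors come from an embedding model and the
    store is a vector database; both are toy stand-ins here."""
    ranked = sorted(store, key=lambda doc: cosine(query_vec, doc["vec"]), reverse=True)
    return [doc["text"] for doc in ranked[:k]]

# Toy 3-dimensional "embeddings"; real models produce hundreds of dimensions.
store = [
    {"text": "port call records", "vec": [0.9, 0.1, 0.0]},
    {"text": "road network edits", "vec": [0.0, 0.9, 0.4]},
    {"text": "vessel positions",   "vec": [0.8, 0.2, 0.1]},
]
context = retrieve([1.0, 0.0, 0.0], store, k=2)
# `context` would then be concatenated into the LLM prompt.
```

Frameworks such as LangChain wrap this loop with chunking, prompt templates, and model calls, but the ranking logic is the same.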
Posted 5 days ago
2.0 - 8.0 years
18 - 20 Lacs
Bengaluru
Work from Office
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Summary
In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
This role is accountable for:
- Designing and Implementing Integrations: Build, deploy, and maintain integrations using cloud-native services (AWS, GCP) and middleware tools.
- API and Event-Driven Architectures: Develop and manage APIs, API gateways, and Kafka-based event streaming solutions for real-time data processing.
- Scalability and Performance Optimization: Ensure integrations are robust, secure, and optimized for performance and cost-effectiveness.
- Monitoring and Troubleshooting: Proactively identify and resolve integration failures, ensuring system reliability and minimal downtime.
- Collaboration and Documentation: Work with cross-functional teams to understand integration needs and maintain clear technical documentation.

Mandatory skill sets (must-have knowledge, skills and experience)
- Strong understanding of CI/CD pipelines and Infrastructure-as-Code principles such as Terraform.
- Experience with CI/CD tooling such as GitHub, Jenkins, Codefresh, Docker, Kubernetes.
- Experienced in building RESTful APIs using Java (Spring Boot).
- Experienced in the AWS development environment and ecosystem.
- Cloud-native and digital solutioning leveraging emerging technologies incl. containers, serverless, data, APIs and microservices.
- Experience with measuring, analysing, monitoring, and optimizing cloud performance, including cloud system reliability and availability.
- Understanding of storage solutions, networking, and security.
- Strong familiarity with cloud-platform-specific Well-Architected Frameworks.
- Production experience of running services in Kubernetes.

Preferred skill sets (good-to-have knowledge, skills and experience)
- Multi-Cloud Experience: Exposure to both AWS and GCP integration services and the ability to work across different cloud providers.
- Message Queue Systems: Experience with messaging systems like RabbitMQ, ActiveMQ, or Google Cloud Pub/Sub.
- Serverless Architectures: Hands-on experience with serverless computing and event-driven workflows.
- Observability and Logging: Knowledge of monitoring and logging tools such as Prometheus, Grafana, AWS CloudWatch, or GCP Stackdriver.
- Security Best Practices: Understanding of cloud security, encryption, and compliance frameworks (e.g., IAM policies, SOC 2, GDPR).
- Testing and Automation: Knowledge of API testing frameworks (Postman, Karate, or REST Assured) and automation testing for integrations.
- Kubernetes and Containerization: Experience deploying and managing integrations in containerized environments using Kubernetes (EKS/GKE).
- Networking Concepts: Understanding of VPCs, private link, service mesh, and hybrid cloud connectivity.
- Knowledge of ANZ Retail and Commercial Banking Systems: Familiarity with ANZ banking technology ecosystems, core banking platforms, payment systems, and regulatory requirements.

Years of experience required: 7 to 8 years (2-3 years relevant)
Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above)
Education Degrees/Field of Study required: Master of Engineering, Master of Business Administration, Bachelor of Engineering

Required Skills: Power BI
Additional skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis (+ 16 more)
Posted 5 days ago
1.0 - 4.0 years
3 - 6 Lacs
Gurugram
Work from Office
About this role
BlackRock is seeking a highly skilled and motivated Analyst to support its growing and dynamic Client Data function! In this role, you will be responsible for driving the accuracy, quality and consistent use of the most impactful, globally relevant data fields, facilitating scale and efficiency across BLK's global sales and service ecosystem. You will work closely with cross-functional teams, including business stakeholders and technical teams for Client Data, to establish standards for the entry and maintenance of client data, implement exception monitoring to identify data inconsistencies, and complete high-risk updates where required. At BlackRock, we are dedicated to encouraging an inclusive environment where every team member can thrive and contribute to our world-class success. This is your chance to be part of a firm that is not only ambitious but also committed to delivering flawless and proven investment strategies.

Key Responsibilities:
As a Data Analyst, you will play a pivotal role in ensuring the accuracy and efficiency of our client data. Your responsibilities will include:
- Data Governance & Quality: Monitor data health and integrity, and ensure data products meet strict standards for accuracy, completeness, and consistency. Conduct regular assessments to identify deficiencies and opportunities for improvement.
- Data Management: Maintain, cleanse and update records within the Client Relationship Management systems. This may include researching information across a variety of data sources, working with internal client support groups to create data structures that mimic client asset pools, and connecting client information across data sources.
- Process Improvement and Efficiency: Identify and complete process improvements from initial ideation to implementation. Collaborate with cross-functional teams (product managers, engineers, and business stakeholders) to plan, design, and deliver data products.
- Quality Assurance: Collaborate with teams to test new CRM features, ensuring tools function accurately and identifying defects for resolution.
- Collaboration & Communication: Prioritize effectively with various collaborators across BlackRock. Ensure efficient and timely data governance and maintenance in an agile environment.

Qualifications & Requirements:
We seek candidates who are ambitious, diligent, and have a proven track record in data management. The ideal candidate will possess the following qualifications:

Experience:
- MBA or equivalent experience required; major in Business, Finance, MIS, Computer Science or related fields preferred
- 1 to 4 years of experience in data management or data processing
- Financial services industry experience is a plus but not required

Skills and Qualifications:
- Proficiency in SQL; Python experience a plus
- Proficiency in data management / reporting tools and technologies such as Power BI a plus
- Experience with business applications including Excel and PowerPoint
- Experience working with CRM platforms; Microsoft Dynamics experience a plus
- Organized and detail-oriented with strong time management skills
- Self-motivated with a strong focus on service and ability to liaise with many groups across the company
- Excellent online research skills
- Exceptional written and verbal communication skills

Our benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

About BlackRock
This mission would not be possible without our smartest investment - the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.
For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock
BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. #EarlyCareers
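The exception monitoring this role describes (flagging inconsistent client records with SQL) reduces to a handful of data-quality queries run against a CRM extract. A minimal sketch using Python's built-in sqlite3; the table, columns, and rules are hypothetical stand-ins, not BlackRock's actual schema:

```python
import sqlite3

# Hypothetical CRM extract; real client data lives in the CRM platform.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER, name TEXT, region TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO clients VALUES (?, ?, ?, ?)",
    [
        (1, "Acme Pension", "EMEA", "ops@acme.example"),
        (2, "Bond Fund A", None, "bonds@fund.example"),   # missing mandatory field
        (1, "Acme Pension", "EMEA", "ops@acme.example"),  # duplicate client id
    ],
)

# Exception rule 1: mandatory fields must be populated.
missing_region = conn.execute(
    "SELECT id, name FROM clients WHERE region IS NULL"
).fetchall()

# Exception rule 2: client ids must be unique.
duplicate_ids = conn.execute(
    "SELECT id, COUNT(*) AS n FROM clients GROUP BY id HAVING n > 1"
).fetchall()
```

In practice the flagged rows would be routed to the analyst for the "high-risk updates" the posting mentions, rather than corrected automatically.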
Posted 5 days ago
2.0 - 3.0 years
4 - 5 Lacs
Mumbai
Work from Office
Job Overview
We are looking for an experienced Data Engineer with at least 2-3 years of hands-on experience in designing and maintaining scalable data infrastructure. You will work with cross-functional teams to ensure high-quality, accessible, and well-structured data systems that support business intelligence, analytics, and other data-driven needs.

Key Responsibilities
- Design, develop, and maintain robust ETL/ELT pipelines for ingesting, transforming, and loading data.
- Build and manage scalable data architectures on cloud platforms such as AWS, GCP, or Azure.
- Ensure data quality, integrity, and consistency through validation, auditing, and monitoring.
- Collaborate with analytics and product teams to gather data requirements and deliver optimized data sets.
- Implement and manage data lakes and data warehouses using tools like Redshift, BigQuery, Azure or Snowflake.
- Develop automated workflows using orchestration tools like Apache Airflow or AWS Step Functions.
- Optimize data processing workflows for performance and cost-efficiency.
- Document data models, data flows, and transformation logic to ensure transparency and maintainability.
- Enforce data security, privacy, and governance best practices.
- Perform regular data audits and troubleshooting for data issues.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Minimum 2-3 years of experience in data engineering roles.
- Strong hands-on experience with cloud data services (e.g., AWS S3, Glue, Athena, Redshift; or equivalents in GCP/Azure).
- Proficiency in Python, PySpark, and SQL for data manipulation and scripting.
- Experience with big data tools such as Spark or Hadoop.
- Solid understanding of data modeling, warehousing concepts, and distributed systems.
- Strong problem-solving skills with a focus on debugging data quality and performance issues.
- Experience with data visualization tools or platforms is a plus (e.g., Power BI, Tableau, Looker).
Good to Have
- Experience with version control and CI/CD for data pipelines.
- Familiarity with message queue systems like Kafka or Kinesis.
- Exposure to real-time data processing.
- Knowledge of infrastructure-as-code tools like Terraform or CloudFormation.
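The ETL/ELT pipelines this posting centres on follow an extract-transform-load shape that can be sketched in three functions. The CSV input, column names, and quality rule below are hypothetical; in production, extract would read from S3 or an API, load would write to a warehouse such as Redshift or BigQuery, and an orchestrator like Airflow would schedule each function as a task:

```python
import csv
import io

def extract(raw_csv):
    """Extract: parse raw CSV rows (stand-in for pulling from S3 or an API)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: normalise types and drop rows failing a simple quality rule."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # a real pipeline would route bad rows to a quarantine table
        clean.append({"order_id": row["order_id"], "amount": round(amount, 2)})
    return clean

def load(rows, warehouse):
    """Load: append to the target store (stand-in for a warehouse table)."""
    warehouse.extend(rows)
    return len(rows)

raw = "order_id,amount\nA1,19.994\nA2,not-a-number\nA3,5.5\n"
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

Keeping the three stages as separate pure functions is what makes pipelines like this testable and lets orchestration tools retry a single failed stage.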
Posted 5 days ago
8.0 - 15.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Associate Director - Data Engineering

About Junglee Games:
With over 140 million users, Junglee Games is a leader in the online skill gaming space. Founded in San Francisco in 2012 and part of the Flutter Entertainment Group, we are revolutionizing how people play games. Our notable games include Howzat, Junglee Rummy, and Junglee Poker. Our team comprises over 900 talented individuals who have worked on internationally acclaimed AAA titles like Transformers and Star Wars: The Old Republic and contributed to Hollywood hits such as Avatar. Junglee's mission is to build entertainment for millions of people around the world and connect them through games. Junglee Games is not just a gaming company but a blend of innovation, data science, cutting-edge tech, and, most importantly, a values-driven culture that is creating the next set of conscious leaders.

Job overview:
As our Associate Director, Data Engineering, you will be responsible for leading a highly qualified team and solving complex problems. You will partner with multiple stakeholders across the organization to efficiently deliver data infrastructure, data modeling, and Generative AI solutions, and also play the role of an architect in the team. You will have a very good understanding of data visualisation and analytics.
Job Location: Bangalore

Key responsibilities:
- Highly developed verbal and written communication skills, with the ability to work up and down within the organization to influence others and achieve results
- Design and implement Generative AI solutions (e.g., LLMs, diffusion models, transformers) for real-world gaming applications, including fraud detection, recommender systems, responsible gaming, and conversational AI (chatbots, virtual assistants)
- Build robust pipelines for model training, inference, and continuous learning
- Partner with key stakeholders across all levels to drive solutions to meet business needs
- Line-manage a team of 7 direct-report Data and Analytics Engineers
- Be accountable for the technical delivery of features to achieve business outcomes
- Mentor a team of data engineers, fostering a culture of innovation
- Drive visualisation strategy, ensuring that business users have access to clear, actionable dashboards and reports
- Participate in solution approaches/designs and operating principles
- Act as the initial point of escalation for the team to remove blockers
- Manage the development of efficient ETL processes to gather, clean, and transform data from various sources
- Stay abreast of emerging technologies and tools, evaluating their potential to enhance our data capabilities
- Foster an engineering mindset
- Demonstrate a commitment and passion for associate development, driving the talent agenda
- Set clear goals and expectations around performance, providing timely feedback and stretch targets
- Stay current with the latest in GenAI research and tooling, bringing innovative approaches to production
- Ensure high data availability, reliability and performance across the stack

Qualifications & skills required
- Typically 8-15 years of prior experience in data engineering, with at least 1-2 years focused on Generative AI or LLMs
- 3-5 years of team management experience
- Experience deploying ML models in production environments (REST APIs, microservices, etc.)
- Hands-on experience with frameworks such as LangChain, LlamaIndex, or Retrieval-Augmented Generation (RAG) systems
- Experience managing multiple concurrent projects and development teams
- Experience with stakeholder management
- Deep understanding of ETL/ELT workflows, batch and real-time data processing
- Proficient with AWS Cloud technologies (e.g. S3, Lambda, DynamoDB, EC2), Python, Spark
- Strong proficiency in data warehousing and ETL processes

Be a part of Junglee Games to:
- Value Customers & Data - Prioritize customers, use data-driven decisions, master KPIs, and leverage ideation and A/B testing to drive impactful outcomes.
- Inspire Extreme Ownership - We embrace ownership, collaborate effectively, and take pride in every detail to ensure every game becomes a smashing success.
- Lead with Love - We reject micromanagement and fear, fostering open dialogue, mutual growth, and a fearless yet responsible work ethic.
- Embrace Change - Change drives progress, and our strength lies in adapting swiftly and recognizing when to evolve to stay ahead.
- Play the Big Game - We think big, challenge norms, and innovate boldly, driving impactful results through fresh ideas and inventive problem-solving.

Avail a comprehensive benefits package that includes paid gift coupons, fitness plans, gadget allowances, fuel costs, family healthcare, and much more.

Know more about us: Explore the world of Junglee Games through our website, www.jungleegames.com. Get a glimpse of what Life at Junglee Games looks like on LinkedIn. Here is a quick snippet of the Junglee Games Offsite '24. Liked what you saw so far? Be A Junglee!
Posted 5 days ago
5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Job Announcement: Remote Sensing and GIS Specialist (posted Sep 3, 2023)

Join Our Team as a Remote Sensing GIS Specialist
At BlueEnergy Build Private Limited (BEBPL), we're a dynamic company specializing in multi-sector investigations. Our primary focus involves conducting comprehensive studies through hydrological, geological, geophysical, and hydrogeological investigations. We take immense pride in our commitment to providing clients with accurate and insightful data. We understand that the success of any project hinges on a profound understanding of the subsurface. That's why we've assembled a team of experts from diverse fields, each contributing their unique expertise to ensure our investigations are thorough and precise. Whether we're assessing groundwater resources for agriculture, deciphering geological formations for infrastructure projects, or conducting hydrogeological studies for environmental conservation, BEBPL is dedicated to delivering high-quality services tailored to our clients' specific needs. We value excellence, innovation, and collaboration and are driven by a shared passion for advancing knowledge in the field of subsurface investigations. Committed to the highest standards of professionalism and ethics, BEBPL strives to be a trusted partner for projects spanning various sectors.

Position: Remote Sensing and GIS Specialist
Location: Hyderabad

Qualifications:
- M.Sc. in Remote Sensing, Geographic Information Systems, Geology, Hydrology, Spatial Information Technology, or any other relevant course.
- Minimum 5 years of professional experience in image processing, GIS mapping, and related work.

Responsibilities:
- Utilize remote sensing and GIS tools to process and analyze geospatial data.
- Conduct image processing, interpretation, and analysis of satellite and aerial imagery.
- Create and maintain detailed GIS maps, databases, and spatial models.
- Collaborate with multidisciplinary teams to provide geospatial solutions for various projects.
Implement advanced spatial analysis techniques to derive meaningful insights from geospatial data. Manage and maintain geospatial databases, ensuring data accuracy and quality. Stay updated with the latest advancements in remote sensing and GIS technologies and recommend their application to improve processes. Effectively communicate findings and present geospatial information to stakeholders. Requirements: Strong academic background with an M.Sc. in a relevant field. Proficiency in remote sensing and GIS software such as ArcGIS, QGIS, ENVI, or similar tools. Demonstrated experience in image processing, data analysis, and geospatial modeling. Familiarity with various types of geospatial data sources, including satellite and aerial imagery. Excellent problem-solving skills and attention to detail. Effective communication skills, both written and verbal. Ability to work independently and as part of a team. Strong organizational skills and the ability to manage and prioritize multiple tasks. Experience: 5+ years in image processing, GIS mapping, and related works. Location: Hyderabad. Salary: Competitive, commensurate with qualifications and experience. How to Apply: Interested candidates who meet the qualifications are invited to apply for the Remote Sensing and GIS Specialist position. Please include "RSGIS Application [Your Name]" in the email subject line. Deadline for Application: 30 days from the date of posting. We are an equal opportunity employer and welcome applications from candidates of all backgrounds. Join our team in Hyderabad and contribute your expertise to exciting geophysical projects. Your skills and commitment will play a key role in advancing our mission. Apply today!
Posted 5 days ago
4.0 - 6.0 years
5 - 9 Lacs
Ahmedabad, Bengaluru
Work from Office
Sr Software Engineer at PierSight | Jobs at PierSight. Bangalore / Ahmedabad, India. As per industry standards. February 21st, 2025. Role: Sr Software Engineer. Industry Type: Space Technology. Location: Ahmedabad / Bangalore. Employment Type: Full-time. Job Description: Are you ready to join the pioneering team at PierSight Space? We're a Space-Tech company with teams in Ahmedabad, California, and Bangalore on a mission to build the world's largest constellation of Synthetic Aperture Radar and AIS satellites for comprehensive ocean surveillance. With backing from prestigious institutional investors like Alphawave Global, Elevation Capital, All in Capital, and Techstars, we're set to make a significant impact. We are seeking a highly skilled and experienced Software Engineer to lead the development of our Maritime Tracking and Analysis software. This role requires a deep understanding of full-stack web development, geospatial data, Docker, Kubernetes, and product management. Cloud certifications are a plus. Key Responsibilities: Architect the full Maritime Analysis and Tracking System, ensuring it meets business needs and technical requirements. Manage the product development lifecycle, from initial concept to final delivery. Lead and manage a team of developers, providing technical guidance and support. Ensure the product is delivered on time, within scope, and meets quality standards. Collaborate with cross-functional teams, including engineering, design, and marketing, to ensure seamless integration and alignment with business objectives. Create detailed product specifications and technical documentation. Make decisions on technology stacks and architectural patterns. Oversee the implementation of product features and ensure quality control. Key Skills and Qualifications: 4-6 years of full-stack web development experience. Strong understanding of building and managing geospatial data. Proficiency in Docker and Kubernetes. Solid understanding of product management principles.
Cloud certifications (AWS, Azure, Google Cloud) are a plus. Excellent problem-solving capabilities and analytical thinking. Strong communication and leadership abilities. Ability to work effectively in a fast-paced, dynamic environment. Preferred Qualifications: Experience in maritime or geospatial software development. Knowledge of cloud-based image production pipelines and data processing workflows. Why Join Us: Be part of a cutting-edge technology startup with a mission to revolutionize maritime surveillance. Work with a dynamic and passionate team of professionals. Opportunity to lead and shape the development of innovative software solutions. Competitive salary and benefits package.
Posted 5 days ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
Job Description: You will be a part of our Data Engineering team and will be focused on delivering exceptional results for our clients. A large portion of your time will be in the weeds working alongside your team architecting, designing, implementing, and optimizing data solutions. You'll work with the team to deliver, migrate, and/or scale cloud data solutions; build pipelines and scalable analytic tools using leading technologies including AWS, Azure, GCP, Spark, Hadoop, etc. What you'll be doing: Develop data pipelines to move and transform data from various sources to data warehouses. Ensure the quality, reliability, and scalability of the organization's data infrastructure. Optimize data processing and storage for performance and cost-effectiveness. Collaborate with data scientists, analysts, and other stakeholders to understand their requirements and develop solutions to meet their needs. Continuously monitor and troubleshoot data pipelines to ensure their reliability and availability. Stay up to date with the latest trends and technologies in data engineering and apply them to improve our data capabilities. Qualifications: Bachelor's degree in Computer Science, Software Engineering, or a related field. 6+ years of experience in data engineering or a related field. Strong programming skills in Python or Scala.
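The posting above centres on pipelines that move and transform data from sources into a warehouse. As a rough, framework-agnostic sketch of that extract-transform-load shape (in the role itself this would be built with Spark, AWS Glue, or similar; every name and record below is invented for illustration):

```python
# Minimal extract-transform-load sketch. All source/warehouse names and
# records are invented stand-ins; only the pipeline structure is the point.

def extract(source_rows):
    """Pull raw records from a source (here: an in-memory list)."""
    return list(source_rows)

def transform(rows):
    """Clean and reshape: drop incomplete records, normalise fields."""
    out = []
    for r in rows:
        if r.get("user_id") is None or r.get("amount") is None:
            continue  # data-quality rule: skip incomplete records
        out.append({"user_id": r["user_id"],
                    "amount": round(float(r["amount"]), 2),
                    "currency": r.get("currency", "USD").upper()})
    return out

def load(rows, warehouse):
    """Append transformed rows to the target store (here: a dict of lists)."""
    warehouse.setdefault("fact_payments", []).extend(rows)
    return len(rows)

source = [
    {"user_id": 1, "amount": "19.991", "currency": "usd"},
    {"user_id": None, "amount": "5.00"},   # dropped by transform
    {"user_id": 2, "amount": 42},          # default currency applied
]
warehouse = {}
loaded = load(transform(extract(source)), warehouse)
print(loaded, warehouse["fact_payments"])
```

In a production setting each stage would additionally be monitored and retried, which is where the posting's "monitor and troubleshoot data pipelines" duty comes in.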
Posted 5 days ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
In the IBM Chief Information Office, you will be part of a dynamic team driving the future of AI and data science in large-scale enterprise transformations. We offer a collaborative environment where your technical expertise will be valued, and your professional development will be supported. Join us to work on challenging projects, leverage the latest technologies, and make a tangible impact on leading organisations. As a Data Scientist within IBM's Chief Information Office, you will support AI-driven projects across the enterprise. You will apply your technical skills in AI, machine learning, and data analytics to assist in implementing data-driven solutions that align with business goals. This role involves working with team members to translate data insights into actionable recommendations. Key Responsibilities: Technical Execution and Leadership: Develop and deploy AI models and data analytics solutions. Support the implementation and optimisation of AI-driven strategies per business stakeholder requirements. Help refine data-driven methodologies for transformation projects. Data Science and AI: Design and implement machine learning solutions and statistical models, from problem formulation through deployment, to analyse complex datasets and generate actionable insights. Learn and utilise cloud platforms to ensure the scalability of AI solutions. Leverage reusable assets and apply IBM standards for data science and development. Project Support: Lead and contribute to various stages of AI and data science projects, from data exploration to model development. Monitor project timelines and help resolve technical challenges. Design and implement measurement frameworks to benchmark AI solutions, quantifying business impact through KPIs. Collaboration: Ensure alignment to stakeholders' strategic direction and tactical needs. Work with data engineers, software developers, and other team members to integrate AI solutions into existing systems. Contribute technical expertise to cross-functional teams.
Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Bachelor's or Master's in Computer Science, Data Science, Statistics, or a related field is required; an advanced degree is strongly preferred. Experience: 5+ years of experience in data science, AI, or analytics with a focus on implementing data-driven solutions. Experience with data cleaning, data analysis, A/B testing, and data visualization. Experience with AI technologies through coursework or projects. Technical Skills: Proficiency in SQL and Python for performing data analysis and developing machine learning models. Knowledge of common machine learning algorithms and frameworks: linear regression, decision trees, random forests, gradient boosting (e.g., XGBoost, LightGBM), neural networks, and deep learning frameworks such as TensorFlow and PyTorch. Experience with cloud-based platforms and data processing frameworks. Understanding of large language models (LLMs). Familiarity with IBM's watsonx product suite. Familiarity with object-oriented programming. Analytical Skills: Strong problem-solving abilities and eagerness to learn.
Posted 5 days ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and platform and customer-facing
Posted 5 days ago
2.0 - 5.0 years
14 - 17 Lacs
Hyderabad
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and platform and customer-facing
Posted 5 days ago
5.0 - 10.0 years
12 - 17 Lacs
Noida
Work from Office
Spark/PySpark: hands-on technical experience in data processing. Table design knowledge using Hive, similar to RDBMS knowledge. Database SQL knowledge for retrieval of data and transformation queries such as joins (full, left, right), ranking, and group by. Good communication skills. Additional skills in GitHub, Jenkins, and shell scripting would be an added advantage. Mandatory Competencies: Big Data - PySpark; Big Data - Spark; Big Data - Hadoop; Big Data - Hive; Database - SQL; DevOps - GitHub; DevOps - Jenkins; DevOps - Shell Scripting; Beh - Communication and collaboration. At Iris Software, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
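The SQL skills this posting lists (outer joins, group by, ranking) can be illustrated with a small self-contained example. Python's built-in sqlite3 is used here purely for portability; in the role itself the same queries would run on Hive or Spark SQL, and the tables and data are invented:

```python
import sqlite3

# In-memory database with two toy tables: orders and customers.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
CREATE TABLE customers (name TEXT, region TEXT);
INSERT INTO orders VALUES (1, 'acme', 120.0), (2, 'acme', 80.0), (3, 'zeta', 50.0);
INSERT INTO customers VALUES ('acme', 'west'), ('zeta', 'east'), ('nova', 'east');
""")

# LEFT JOIN + GROUP BY: total order amount per customer, keeping
# customers with no orders ('nova' appears with a NULL total).
rows = cur.execute("""
    SELECT c.name, c.region, SUM(o.amount) AS total
    FROM customers c
    LEFT JOIN orders o ON o.customer = c.name
    GROUP BY c.name, c.region
    ORDER BY c.name
""").fetchall()
print(rows)

# Ranking with a window function: customers ordered by spend within region.
ranked = cur.execute("""
    SELECT name, region,
           RANK() OVER (PARTITION BY region ORDER BY total DESC) AS rnk
    FROM (SELECT c.name, c.region, COALESCE(SUM(o.amount), 0) AS total
          FROM customers c LEFT JOIN orders o ON o.customer = c.name
          GROUP BY c.name, c.region)
""").fetchall()
print(ranked)
conn.close()
```

The same join/group-by/window-function patterns carry over directly to HiveQL and Spark SQL, which is presumably why the posting groups them together.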
Posted 5 days ago
3.0 - 8.0 years
4 - 9 Lacs
Mumbai Suburban
Work from Office
Job Title: Data Processing (DP) Executive. Location: MIDC, Andheri East, Mumbai. Work Mode: Work From Office (WFO). Work Days: Monday to Friday. Work Hours: 9:00 PM to 6:00 AM IST (Night Shift). Job Summary: We are seeking a highly skilled and detail-oriented Data Processing (DP) Executive to join our team. The ideal candidate will have a solid background in data analysis and processing, strong proficiency in industry-standard tools, and the ability to manage large data sets efficiently. This role is critical in ensuring data integrity and delivering accurate insights for business decision-making. Key Responsibilities: Manage and process data using tools like SPSS and Q programming. Perform data cleaning, transformation, and statistical analysis. Collaborate with research and analytics teams to interpret and format data for reporting. Create reports and dashboards; experience with Tableau or similar visualization tools is an advantage. Utilize SQL for data querying and validation. Ensure accuracy and consistency of data deliverables across projects. Handle multiple projects simultaneously with a keen eye for detail and timelines. Technical Skills: Proficiency in SPSS and Q programming. Strong understanding of data processing techniques and statistical methods. Familiarity with Tableau or other data visualization tools (preferred). Basic working knowledge of SQL. Educational Qualifications: Bachelor's degree in Statistics, Computer Science, Data Science, or a related field. Experience: Minimum 3 years of experience in data processing or a similar analytical role. Soft Skills: Excellent analytical and problem-solving abilities. Strong attention to detail and accuracy. Good communication skills and the ability to work in a team-oriented environment. Self-motivated with the ability to work independently and manage multiple tasks effectively.
Posted 5 days ago
2.0 - 3.0 years
2 - 2 Lacs
Chennai
Work from Office
MVH requires a data entry operator for handling large data files: research data, data extraction and data collection, data entry, and cleaning the data in REDCap and Excel sheets. Candidates can send their CV to hr@mvdiabetes.in or call 6381040749
Posted 5 days ago
2.0 - 4.0 years
2 - 6 Lacs
Gurugram
Work from Office
As a key member of the DTS team, you will primarily collaborate closely with a global leading hedge fund on data engagements. Partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures. Desired Skills and Experience: Essential skills: B.Tech/M.Tech/MCA with 2-4 years of overall experience. Skilled in Python and SQL. Experience with data modeling, data warehousing, and building data pipelines. Experience working with FTP, API, S3, and other distribution channels to source data. Experience working with financial and/or alternative data products. Experience working with cloud-native tools for data processing and distribution. Experience with Snowflake and Airflow. Key Responsibilities: Engage with vendors and technical teams to systematically ingest, evaluate, and create valuable data assets. Collaborate with the core engineering team to create central capabilities to process, manage, and distribute data assets at scale. Apply robust data quality rules to systematically qualify data deliveries and guarantee the integrity of financial datasets. Engage with technical and non-technical clients as an SME on data asset offerings. Key Metrics: Python, SQL; Snowflake; data engineering and pipelines. Behavioral Competencies: Good communication (verbal and written). Experience in managing client stakeholders.
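The "data quality rules to qualify data deliveries" responsibility above can be sketched as a small rule-based check that either passes a vendor delivery or reports which rules it broke. The rules and field names here are invented examples, not the firm's actual checks:

```python
# Rule-based qualification of an incoming data delivery: each rule is a
# named predicate over the full batch of rows. All rules and fields are
# hypothetical illustrations.

RULES = [
    ("non_empty", lambda rows: len(rows) > 0),
    ("no_null_keys", lambda rows: all(r.get("ticker") for r in rows)),
    ("prices_positive", lambda rows: all(r["price"] > 0 for r in rows)),
]

def qualify(rows):
    """Run every rule; return (passed, list of failed rule names)."""
    failed = [name for name, check in RULES if not check(rows)]
    return (not failed, failed)

# A clean delivery passes all rules.
delivery = [
    {"ticker": "ABC", "price": 101.5},
    {"ticker": "XYZ", "price": 99.0},
]
ok, failed = qualify(delivery)
print(ok, failed)

# A bad delivery is rejected with the specific rules it violated.
bad = [{"ticker": "", "price": -1.0}]
ok2, failed2 = qualify(bad)
print(ok2, failed2)
```

In practice the same pattern is usually expressed in a dedicated framework (e.g. checks attached to an Airflow task), but the shape of "named rules, batch in, pass/fail plus diagnostics out" is the same.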
Posted 5 days ago
4.0 - 6.0 years
4 - 8 Lacs
Gurugram
Work from Office
Supporting the client in Financial Planning and Analysis (FP&A) activities, including collecting revenue, headcount, and cost submissions. Support and actively participate in forecast and budgeting functions, data processing, and the review and build-up of revenue, headcount, and cost Excel spreadsheets. Prepare and manage different reporting activities related to relevant business areas and KPIs. Responsible for supporting the onshore team in preparing relevant projections on key areas and KPIs. Assist in the preparation of presentations to track and analyze the performance of key areas of the business; assist in improving existing templates, flagging and documenting any lags in the information provided, and sharing suggestions. Perform variance analysis (actuals vs. estimates) to determine deviations from projected metrics and help identify areas for improvement. Support ad-hoc analysis and projects per client requests. Contribute toward managing project timelines and the quality of deliverables to ensure high client satisfaction. Demonstrate strength and flair in client/requester relationship building and management, and information/knowledge needs assessment. Key Competencies: CA/MBA/CFA. 4+ years of experience in the FP&A domain. The candidate should have the ability to work as part of a team and independently as required. Excellent written and verbal communication skills. Good knowledge of accounting principles, budgeting, and forecasting. Strong MS Office skills in MS PowerPoint, MS Excel, and MS Word.
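The variance analysis mentioned above (actuals vs. estimates) boils down to a simple calculation per metric: the absolute deviation and the deviation as a percentage of the estimate. A minimal sketch, with invented metric names and figures:

```python
# Variance analysis sketch: actual vs. budget/estimate per metric.
# Metric names and numbers are invented examples.

def variance(actual, estimate):
    """Absolute and percentage deviation of actual from estimate."""
    abs_var = actual - estimate
    pct_var = abs_var * 100.0 / estimate if estimate else float("nan")
    return abs_var, pct_var

budget = {"revenue": 1000.0, "headcount_cost": 400.0}
actuals = {"revenue": 1100.0, "headcount_cost": 380.0}

report = {k: variance(actuals[k], budget[k]) for k in budget}
print(report)
```

A positive variance on revenue is favourable while a positive variance on cost is not, so real FP&A reports usually also tag each line favourable/unfavourable; that sign convention is omitted here for brevity.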
Posted 5 days ago
0.0 - 3.0 years
2 - 3 Lacs
Noida
Work from Office
EXL IS HIRING FOR BACK-OFFICE (CONTRACTUAL ROLE) PROCESS. About EXL: EXL Service is a global analytics and digital solutions company serving industries including insurance, healthcare, banking and financial services, media, retail, and others. The company is headquartered in New York and has more than 37,000 professionals in locations throughout the United States, Europe, Asia, Latin America, Australia and South Africa. http://www.exlservice.com ELIGIBILITY: - Candidate should be a graduate (any stream). - Both freshers and experienced candidates can apply. - Candidate should be comfortable with night shifts. - Candidates should be comfortable with work from office (Sec-144, Noida). - Notice period: immediate joiners preferred. - B.Tech graduates/Diploma graduates will not be entertained. Please note: this is a contractual role for a period of 06 months. PERKS AND BENEFITS: - Salary: freshers 2.50 LPA; experienced candidates offered 3.00 LPA (depending on last drawn salary and experience). - 5 days working. - Both-sides transport till further update (within the hiring grid). NOTE: Do not carry any electronic items like laptops or pen drives. MANDATORY DOCUMENTS: Please carry hard copies of your resume (2 copies), AADHAR card, a photocopy of your PAN card, and 2 recent passport-size photographs. Entry will not be allowed into the premises without the above-mentioned documents. Please come between 11:00 AM and 1:30 PM, as entry is not allowed post 2:00 PM. Regards, EXL RECRUITMENT TEAM. EXL: Empowering Businesses Through Data & AI. EXL is a global leader in analytics, AI, and digital solutions for all industries. Let us power your growth with generative AI and digital transformation!
Posted 5 days ago
0.0 - 4.0 years
0 - 1 Lacs
Vadodara
Work from Office
Responsibilities: * Maintain confidentiality at all times * Input data accurately using computer software * Process documents with precision * Meet deadlines consistently * Manage back office tasks efficiently
Posted 5 days ago
0.0 - 1.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Job Summary: As a Clinical Reporting Analyst, you will be integral to our mission of providing accurate and timely analysis of ECG data, contributing to the improvement of patient care and outcomes. We look forward to your contributions to our team and the impact you will make in enhancing our data processing capabilities. Join us in embracing the startup vibe of agility, open communication, and teamwork. Here, you'll thrive in an environment where learning, challenging the status quo, and unleashing your creativity are encouraged. Your voice matters, and together, we move swiftly, learn from missteps, and make meaningful impacts. Let's forge ahead, innovate, and make a difference. Come be a part of our dynamic team! Job Responsibilities: Every candidate goes through a 6-week training program covering ECG analysis, data processing techniques, and software training. Once the training completes, your primary duties will include: Sanitise and process upBeat data as per the standard process. Prepare upBeat data with appropriate highlights for further processing. Effectively communicate ECG abnormalities by notifying lead technicians and/or physicians and clinical staff as necessary. Maintain compliance with job-specific proficiency requirements. Your specific responsibilities may change from time to time at the discretion of the Company. You will also be expected to comply with all rules, policies, and procedures of the Company, as they may be adopted and modified from time to time. Candidate Requirements: 12th grade plus a Diploma in Cardiology, or a Bachelor's Degree in Zoology or life sciences. Experience as a Holter scanner or telemetry/monitor technician will be an added advantage. Proficiency in handling computers. Excellent attention to detail. Positive attitude and team player, with the ability to use critical-thinking skills. Knowledge of medical terminology, specific to cardiology and electrophysiology. Excellent written and verbal communication skills. Strong analytical and interpersonal skills.
Posted 5 days ago
12.0 - 15.0 years
12 - 15 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
AWS experience (not Azure or GCP), with 12-15 years of overall experience and hands-on expertise in design and implementation. Design and develop data solutions: design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Candidates should possess exceptional communication skills to engage effectively with US clients. The ideal candidate must be hands-on with significant practical experience. Availability to work overlapping US hours is essential. The contract duration is 6 months.
Posted 5 days ago
2.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About The Role This is an Internal document. Job Title: Senior Data Engineer. As a Senior Data Engineer, you will play a key role in designing and implementing data solutions @Kotak811. You will be responsible for leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams to deliver high-quality and scalable data infrastructure. Your expertise in data architecture, performance optimization, and data integration will be instrumental in driving the success of our data initiatives. Responsibilities: 1. Data Architecture and Design: a. Design and develop scalable, high-performance data architecture and data models. b. Collaborate with data scientists, architects, and business stakeholders to understand data requirements and design optimal data solutions. c. Evaluate and select appropriate technologies, tools, and frameworks for data engineering projects. d. Define and enforce data engineering best practices, standards, and guidelines. 2. Data Pipeline Development & Maintenance: a. Develop and maintain robust and scalable data pipelines for data ingestion, transformation, and loading for real-time and batch use cases. b. Implement ETL processes to integrate data from various sources into data storage systems. c. Optimise data pipelines for performance, scalability, and reliability: i. Identify and resolve performance bottlenecks in data pipelines and analytical systems. ii. Monitor and analyse system performance metrics, identifying areas for improvement and implementing solutions. iii. Optimise database performance, including query tuning, indexing, and partitioning strategies. d. Implement real-time and batch data processing solutions. 3. Data Quality and Governance: a. Implement data quality frameworks and processes to ensure high data integrity and consistency. b. Design and enforce data management policies and standards. c. Develop and maintain documentation, data dictionaries, and metadata repositories. d. Conduct data profiling and analysis to identify data quality issues and implement remediation strategies. 4. ML Models Deployment & Management (a plus): a. Responsible for designing, developing, and maintaining the infrastructure and processes necessary for deploying and managing machine learning models in production environments. b. Implement model deployment strategies, including containerization and orchestration using tools like Docker and Kubernetes. c. Optimise model performance and latency for real-time inference in consumer applications. d. Collaborate with DevOps teams to implement continuous integration and continuous deployment (CI/CD) processes for model deployment. e. Monitor and troubleshoot deployed models, proactively identifying and resolving performance or data-related issues. f. Implement monitoring and logging solutions to track model performance, data drift, and system health. 5. Team Leadership and Mentorship: a. Lead data engineering projects, providing technical guidance and expertise to team members. i. Conduct code reviews and ensure adherence to coding standards and best practices. b. Mentor and coach junior data engineers, fostering their professional growth and development. c. Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to drive successful project outcomes. d. Stay abreast of emerging technologies, trends, and best practices in data engineering and share knowledge within the team. i. Participate in the evaluation and selection of data engineering tools and technologies. Qualifications: 1. 3-5 years' experience with a Bachelor's Degree in Computer Science, Engineering, Technology, or a related field required. 2. Good understanding of streaming technologies like Kafka and Spark Streaming. 3. Experience with Enterprise Business Intelligence Platform/Data platform sizing, tuning, optimization, and system landscape integration in large-scale, enterprise deployments. 4. Proficiency in one programming language, preferably Java, Scala, or Python. 5. Good knowledge of Agile and SDLC/CI-CD practices and tools. 6. Must have proven experience with Hadoop, MapReduce, Hive, Spark, and Scala programming, with in-depth knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs. 7. Proven experience in development of conceptual, logical, and physical data models for Hadoop, relational, EDW (enterprise data warehouse), and OLAP database solutions. 8. Good understanding of distributed systems. 9. Experience working extensively in a multi-petabyte DW environment. 10. Experience in engineering large-scale systems in a product environment.
Posted 6 days ago
5.0 - 8.0 years
12 - 22 Lacs
Pune, Maharashtra
Hybrid
Job Role: You'll be responsible for hands-on work with data warehousing tools and methodologies. Design and manage scalable infrastructure on Google Cloud Platform (GCP) to support various application and data workloads. Implement and manage IAM policies, roles, and permissions to ensure secure access across GCP services. Build and optimize workflows using Cloud Composer (Airflow) and manage data processing pipelines via Dataproc. Provision and maintain Compute Engine VMs and integrate them into broader system architectures. Set up and query data in BigQuery, and manage data flows securely and efficiently. Develop and maintain CI/CD pipelines using Argo CD, Jenkins, or GitOps methodologies. Administer Kubernetes clusters (GKE), including node scaling, workload deployments, and Helm chart management. Create and maintain YAML files to define infrastructure as code. Monitor system health and performance using tools like Prometheus, Grafana, and GCP's native monitoring stack. Troubleshoot infrastructure issues, perform root cause analysis, and implement preventative measures. Collaborate with development teams to integrate infrastructure best practices and support application delivery. Document infrastructure standards, deployment processes, and operational procedures. Participate in Agile ceremonies, contributing to sprint planning, daily stand-ups, and retrospectives.
Posted 6 days ago
5.0 - 9.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom. Service Line: Data & Analytics Unit. Responsibilities: 1. 5-8 years' experience in Azure (hands-on experience in Azure Databricks and Azure Data Factory). 2. Good knowledge of SQL and PySpark. 3. Should have knowledge of the Medallion architecture pattern. 4. Knowledge of Integration Runtime. 5. Knowledge of different ways of scheduling jobs via ADF (event/schedule, etc.). 6. Should have knowledge of AAS and cubes. 7. Create, manage, and optimize cube processing. 8. Good communication skills. 9. Experience in leading a team. Additional Responsibilities: Good knowledge of software configuration management systems. Strong business acumen, strategy, and cross-industry thought leadership. Awareness of the latest technologies and industry trends. Logical thinking and problem-solving skills, along with an ability to collaborate. Knowledge of two or three industry domains. Understanding of the financial processes for various types of projects and the various pricing models available. Client interfacing skills. Knowledge of SDLC and agile methodologies. Project and team management. Preferred Skills: Technology->Big Data - Data Processing->Spark.
Posted 6 days ago