3.0 - 6.0 years
5 - 9 Lacs
Chennai
Work from Office
We are looking for a skilled Hadoop Developer with 3 to 6 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have expertise in developing and implementing big data solutions using Hadoop technologies. Roles and Responsibilities: Design, develop, and deploy scalable big data applications using Hadoop. Collaborate with cross-functional teams to identify business requirements and develop solutions. Develop and maintain large-scale data processing systems using Hadoop MapReduce. Troubleshoot and optimize performance issues in existing Hadoop applications. Participate in code reviews to ensure high-quality code standards. Stay updated with the latest trends and technologies in big data development. Job Requirements: Strong understanding of the Hadoop ecosystem, including HDFS, YARN, and Oozie. Experience with programming languages such as Java or Python. Knowledge of database management systems such as MySQL or NoSQL databases. Familiarity with agile development methodologies and version control systems like Git. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment and communicate effectively with stakeholders.
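To make the MapReduce requirement above concrete: the model splits work into a map phase that emits key-value pairs and a reduce phase that aggregates them. A minimal Hadoop Streaming word count in Python — a sketch only, not part of the posting; script names and paths are hypothetical:

```python
#!/usr/bin/env python3
# mapper.py -- reads text from stdin, emits one "word<TAB>1" pair per word
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")

# ----------------------------------------------------------------------
# reducer.py -- Hadoop sorts mapper output by key, so counts for each
# word arrive contiguously and can be summed with a single running total
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        count += int(value)
    else:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")
```

The two scripts would typically be wired together with the streaming jar, e.g. `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out` (paths hypothetical).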
Posted 1 day ago
5.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
We are looking for a skilled Senior Data Engineer with 5-8 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibilities: Design, develop, and implement large-scale data pipelines and architectures. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain complex data systems and databases. Ensure data quality, integrity, and security. Optimize data processing workflows for improved performance and efficiency. Troubleshoot and resolve technical issues related to data engineering. Job Requirements: Strong knowledge of data engineering principles and practices. Experience with data modeling, database design, and data warehousing. Proficiency in programming languages such as Python, Java, or C++. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills.
Posted 1 day ago
5.0 - 7.0 years
7 - 9 Lacs
Gurugram
Work from Office
Practice Overview: Practice: Data and Analytics (DNA) - Analytics Consulting. Role: Associate Director - Data & Analytics. Location: Gurugram, India. At Oliver Wyman DNA, we partner with clients to solve tough strategic business challenges with the power of analytics, technology, and industry expertise. We drive digital transformation, create customer-focused solutions, and optimize operations for the future. Our goal is to achieve lasting results in collaboration with our clients and stakeholders. We value and offer opportunities for personal and professional growth. Join our entrepreneurial team focused on delivering impact globally. Our Mission and Purpose: Mission: Leverage India's high-quality talent to provide exceptional analytics-driven management consulting services that empower clients globally to achieve their business goals and drive sustainable growth, by working alongside Oliver Wyman consulting teams. Purpose: Our purpose is to bring together a diverse team of the highest-quality talent, equipped with innovative analytical tools and techniques to deliver insights that drive meaningful impact for our global client base. We strive to build long-lasting partnerships with clients based on trust, mutual respect, and a commitment to deliver results. We aim to build a dynamic and inclusive organization that attracts and retains the top analytics talent in India and provides opportunities for professional growth and development. Our goal is to provide a sustainable work environment while fostering a culture of innovation and continuous learning for our team members. The Role and Responsibilities: We are looking to hire an Associate Director on the Data Science & Data Engineering track. We seek individuals with relevant prior experience in quantitatively intense areas to join our team. You'll be working with varied and diverse teams to deliver unique and unprecedented solutions across all industries. In the data science track, you will be primarily responsible for managing and delivering analytics projects and helping teams design analytics solutions and models that consistently drive scalable, high-quality solutions. In the data engineering track, you will be primarily responsible for developing and monitoring high-performance applications that can rapidly deploy the latest machine learning frameworks and other advanced analytical techniques at scale. This role requires you to be a proactive learner and to quickly pick up new technologies whenever required. Most of the projects require handling big data, so you will be required to work on related technologies extensively. You will work closely with other team members to support project delivery and ensure client satisfaction.
Your responsibilities will include: Working alongside Oliver Wyman consulting teams and partners, engaging directly with global clients to understand their business challenges. Exploring large-scale data and crafting models to answer core business problems. Working with partners and principals to shape proposals that showcase our data science and analytics capabilities. Explaining, refining, and crafting model insights and architecture to guide stakeholders through the journey of model building. Advocating best practices in modelling and code hygiene. Leading the development of proprietary statistical techniques, ML algorithms, assets, and analytical tools on varied projects. Travelling to clients' locations across the globe, when required, understanding their problems, and delivering appropriate solutions in collaboration with them. Keeping up with emerging state-of-the-art modelling and data science techniques in your domain. Your Attributes, Experience & Qualifications: Bachelor's or Master's degree in a quantitative discipline from a top academic program (Data Science, Mathematics, Statistics, Computer Science, Informatics, or Engineering). Prior experience in data science, machine learning, and analytics. Passion for problem-solving through big data and analytics. Pragmatic and methodical approach to solutions and delivery with a focus on impact. Independent worker with the ability to manage workload and meet deadlines in a fast-paced environment. Impactful presentation skills that succinctly and efficiently convey findings, results, strategic insights, and implications. Excellent verbal and written communication skills and complete command of English. Willingness to travel. Collaborative team player. Respect for confidentiality. Technical Background (Data Science): Proficiency in modern programming languages (Python is mandatory; SQL, R, SAS desired) and machine learning frameworks (e.g., Scikit-Learn, TensorFlow, Keras/Theano, Torch, Caffe, MXNet). Prior experience in designing and deploying large-scale technical solutions leveraging analytics. Solid foundational knowledge of the mathematical and statistical principles of data science. Familiarity with cloud storage, handling big data, and computational frameworks. Valued but not required: Compelling side projects or contributions to the open-source community. Experience presenting at data science conferences and connections within the data science community. Interest/background in Financial Services in particular, as well as other sectors where Oliver Wyman has a strategic presence. Technical Background (Data Engineering): Prior experience in designing and deploying large-scale technical solutions. Fluency in modern programming languages (Python is mandatory; R, SAS desired). Experience with AWS/Azure/Google Cloud, including familiarity with services such as S3, EC2, Lambda, Glue. Strong SQL skills and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data tools like Hadoop, Spark, Kafka. Demonstrated knowledge of data structures and algorithms. Familiarity with version control systems like GitHub or Bitbucket. Familiarity with modern storage and computational frameworks. Basic understanding of agile methodologies such as CI/CD, application resiliency, and security. Valued but not required: Compelling side projects or contributions to the open-source community. Prior experience with machine learning frameworks (e.g., Scikit-Learn, TensorFlow, Keras/Theano, Torch, Caffe, MXNet). Familiarity with containerization technologies, such
as Docker and Kubernetes. Experience with UI development using frameworks such as Angular, Vue, or React. Experience with NoSQL databases such as MongoDB or Cassandra. Experience presenting at data science conferences and connections within the data science community. Interest/background in Financial Services in particular, as well as other sectors where Oliver Wyman has a strategic presence. Interview Process: The application process will include testing technical proficiency, a case study, and team-fit interviews. Please include a brief note introducing yourself, what you're looking for when applying for the role, and your potential value-add to our team. Roles and Levels: In addition to the base salary, this position may be eligible for performance-based incentives. We offer a competitive total rewards package that includes comprehensive health and welfare benefits as well as employee assistance programs.
Posted 1 day ago
5.0 - 8.0 years
7 - 11 Lacs
Hyderabad
Work from Office
We are looking for a skilled Big Data professional with 4 to 9 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in big data and excellent analytical skills. Roles and Responsibilities: Design, develop, and implement big data solutions using various technologies. Collaborate with cross-functional teams to identify business problems and develop solutions. Develop and maintain large-scale data systems and architectures. Analyze complex data sets to extract insights and trends. Implement data quality checks and ensure data integrity. Stay updated with industry trends and emerging technologies in big data. Job Requirements: Strong understanding of big data concepts and technologies. Excellent analytical and problem-solving skills. Ability to work in a fast-paced environment and meet deadlines. Strong communication and collaboration skills. Experience with big data tools and frameworks such as Hadoop, Spark, and NoSQL databases. Strong attention to detail and ability to deliver high-quality results.
Posted 1 day ago
8.0 - 13.0 years
16 - 20 Lacs
Bengaluru
Work from Office
About the Role: Tech lead with BI and analytics experience; hands-on with GenAI code-assist tools and API development. Qualifications: Experience: 8-15 years. Required Skills: Python, ClickHouse, Databricks, Azure Cloud, containerization technologies.
Posted 1 day ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company – HEAL Software. Location – Bangalore. Relevant Work Experience – 2+ years. Education Level – BS in CS/EE/CE or related field from a top institution. Description: HEAL Software is a renowned provider of AIOps (Artificial Intelligence for IT Operations) solutions. With its state-of-the-art AIOps solutions, HEAL Software consistently drives digital transformation and delivers significant value to businesses across diverse industries. HEAL Software's unwavering dedication to leveraging AI and automation empowers IT teams to address IT challenges, enhance incident management, reduce downtime, and ensure seamless IT operations. Through the analysis of extensive data, our solutions provide real-time insights, predictive analytics, and automated remediation, thereby enabling proactive monitoring and solution recommendation. HEAL Software Inc. is headquartered in Santa Clara, California, in the US. Read more at healsoftware.ai and follow us on Twitter and on LinkedIn. Our Investors: HEAL Software is backed by Avataar Ventures Partners. Our Clients: Our clients include banks, telecom, IT, and mid-sized enterprises across the globe. We work predominantly with banking clients: top private and public sector banks in India and overseas (Middle East, Africa, and Southeast Asia regions). We have over 50 happy customers and plan to increase our customer base to 500+ in the next 24 to 36 months. Why Choose Us: We have patented ML techniques to find the right performance insights. Our methodologies are unique in finding resource utilization based on various workload patterns – physical / virtual / containers / cloud. We have the world's only systematically computed capacity chokepoint analysis solution. Our product has a unique way of identifying discretionary and non-discretionary workloads. We have over a decade of experience in providing visibility and predictions on application performance. Roles and Responsibilities: Design, develop, and maintain Java-based applications, ensuring optimal performance, reliability, and scalability. Write clean, efficient, and well-documented code following industry best practices and coding standards. Participate in the entire software development lifecycle, including requirements analysis, design, implementation, testing, deployment, and support. Collaborate with product managers, business analysts, and other stakeholders to understand requirements and translate them into technical solutions. Conduct code reviews, provide constructive feedback, and mentor junior team members to promote continuous improvement and knowledge sharing. Troubleshoot and debug issues reported by clients or detected during testing, and implement timely and effective solutions. Work closely with QA engineers to ensure the quality of software deliverables through thorough testing and validation. Contribute to architectural design discussions and decisions and participate in team technical discussions. Collaborate with DevOps engineers to automate deployment processes and enhance system monitoring and performance optimization. Skill Requirement: The ideal candidate should have a passion for building products, solving problems, and building data pipelines. Proficiency in Java 8 and higher. Experience in Clojure, Scala, or Java; knowledge of Spark and Flink. The basics must be very strong – design, coding, testing, and debugging skills. Proficiency in web application development using Java-based technologies (Servlets, JSP, etc.).
Familiarity with relational databases (e.g., MySQL, PostgreSQL) and knowledge of SQL. BS in CS/EE/CE or related field from a top institution. 2+ years of hands-on experience in Java, data structures, and algorithms on Linux. Experience/knowledge of microservices, Docker, Kubernetes, and agile methodologies and tools (e.g., Scrum, JIRA) is a plus. Familiarity with cloud platforms (e.g., AWS, Azure) and microservices architecture is desirable. A demonstrable understanding of software development concepts, problem breakdown, project management, and good communication. Experience with the full product build life cycle of developing, debugging, optimizing, and maintaining code.
Posted 1 day ago
5.0 - 9.0 years
2 - 3 Lacs
Ahmedabad
Work from Office
*instinctools is a software development company that provides custom software solutions for businesses of all sizes. Our team works closely with clients to understand their specific needs and provide personalized solutions that meet their business requirements. *instinctools is looking for a Senior Data Engineer for one of our clients. Our Client is one of the TOP-5 global management consulting firms, considered to be among the most prestigious in the world. Hundreds of customers from the Fortune 500, including the largest global financial institutions, the world's top media companies, technology companies, and federal government agencies, rely on our Client's proven platform and services. The project is a dynamic solution empowering companies to optimize promotional activities for maximum impact. It collects and validates data, analyzes promotion effectiveness, plans calendars, and integrates seamlessly with existing systems. The tool enhances vendor collaboration, negotiates better deals, and employs machine learning to optimize promotional plans, enabling companies to make informed decisions and maximize return on investment. Stack on the project: Databricks, SQL, Spark, AWS, Python. Tasks: Build and optimize data pipelines using Databricks, SQL, and Apache Spark. Design and implement scalable data processing systems. Manage and optimize data pipelines. Ensure the quality and efficiency of data flows. Our expectations of the ideal candidate: 5+ years of experience as a Data Engineer. Deep expertise in big data technologies, particularly Databricks, SQL, and Spark. Very strong SQL skills. Experience in data modeling and ETL processes. Experience with analytics engineering is a plus. Experience with DBT, AWS, Python. Soft skills: preference for a problem-solving style over experience; ability to clarify requirements with the customer; willingness to pair with other engineers when solving complex issues; good communication skills. English: Upper-Intermediate or higher. We offer: flexible working time (from Indian location); a professional and ambitious team; learning opportunities, seminars and conferences, and time for exploring new technologies; co-funding for language courses (English).
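As a rough illustration of the pipeline work described above — a minimal PySpark sketch, assuming a Databricks-style Spark environment; the bucket, table, and column names are hypothetical, not taken from the project:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("promo-effectiveness").getOrCreate()

# Ingest raw promotion data (placeholder S3 path and schema)
raw = spark.read.parquet("s3://example-bucket/raw/promotions/")

# Validate: drop rows missing keys, flag out-of-range discounts
clean = (
    raw.dropna(subset=["promo_id", "sku", "week"])
       .withColumn("valid_discount", F.col("discount_pct").between(0, 90))
)

# Aggregate promotion effectiveness per SKU and week
effectiveness = (
    clean.filter("valid_discount")
         .groupBy("sku", "week")
         .agg(F.sum("units_sold").alias("units"),
              F.avg("discount_pct").alias("avg_discount"))
)

# Persist as a managed table for downstream analytics
effectiveness.write.mode("overwrite").saveAsTable("analytics.promo_effectiveness")
```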
Posted 1 day ago
11.0 - 13.0 years
35 - 50 Lacs
Bengaluru
Work from Office
Principal AWS Data Engineer. Location: Bangalore. Experience: 9 - 12 years. Job Summary: In this key leadership role, you will lead the development of foundational components for a Lakehouse architecture on AWS and drive the migration of existing data processing workflows to the new Lakehouse solution. You will work across the Data Engineering organisation to design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue, and the Glue Data Catalog. The main goal of this position is to ensure successful migration and establish robust data quality governance across the new platform, enabling reliable and efficient data processing. Success in this role requires deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team. Must Have Tech Skills: Prior Principal Engineer experience, leading team best practices in design, development, and implementation, mentoring team members, and fostering a culture of continuous learning and innovation. Extensive experience in software architecture and solution design, including microservices, distributed systems, and cloud-native architectures. Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices. Deep technical knowledge of AWS data services and engineering practices, with demonstrable experience of implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena. Experience of delivering Lakehouse solutions/architectures. Nice To Have Tech Skills: Knowledge of additional programming languages and development tools to provide flexibility and adaptability across varied data engineering projects. A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous. Key Accountabilities: Leads complex projects autonomously, fostering an inclusive and open culture within development teams. Mentors team members and leads technical discussions. Provides strategic guidance on best practices in design, development, and implementation. Leads the development of high-quality, efficient code and develops the necessary tools and applications to address complex business needs. Collaborates closely with architects, Product Owners, and Dev team members to decompose solutions into Epics, leading the design and planning of these components. Drives the migration of existing data processing workflows to a Lakehouse architecture, leveraging Iceberg capabilities. Serves as an internal subject matter expert in software development, advising stakeholders on best practices in design, development, and implementation. Key Skills: Deep technical knowledge of data engineering solutions and practices. Expert in AWS services and cloud solutions, particularly as they pertain to data engineering practices. Extensive experience in software architecture and solution design. Specialized expertise in Python and Spark. Ability to provide technical direction, set high standards for code quality, and optimize performance in data-intensive environments. Skilled in leveraging automation tools and Continuous Integration/Continuous Deployment (CI/CD) pipelines to streamline development, testing, and deployment. Exceptional communicator who can translate complex technical concepts for diverse stakeholders, including engineers, product managers, and senior executives.
Provides thought leadership within the engineering team, setting high standards for quality, efficiency, and collaboration. Experienced in mentoring engineers, guiding them in advanced coding practices, architecture, and strategic problem-solving to enhance team capabilities. Educational Background: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential. Bonus Skills: Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices.
Posted 1 day ago
6.0 - 11.0 years
16 - 31 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
• Hadoop, Spark, Kafka, Airflow, and experience with any database
• SQL – basic and advanced SQL
• Programming – Python or Scala
• Good to have – data visualization is an added advantage
Posted 1 day ago
6.0 - 7.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary: Experience: 5 - 8 years. Location: Bangalore. Contribute to building state-of-the-art data platforms in AWS, leveraging Python and Spark. Be part of a dynamic team building data solutions in a supportive and hybrid work environment. This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement, and optimize ETL workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team. Must Have Tech Skills: Demonstrable experience as a senior data engineer. Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices. Experience of implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena. Experience with data services in a Lakehouse architecture. Good background and proven experience of data modelling for data platforms. Nice To Have Tech Skills: A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous. Key Accountabilities: Provides guidance on best practices in design, development, and implementation, ensuring solutions meet business requirements and technical standards. Works closely with architects, Product Owners, and Dev team members to decompose solutions into Epics, leading design and planning of these components. Drives the migration of existing data processing workflows to the Lakehouse architecture, leveraging Iceberg capabilities. Communicates complex technical information clearly, tailoring messages to the appropriate audience to ensure alignment. Key Skills: Deep technical knowledge of data engineering solutions and practices. Implementation of data pipelines using AWS data services and Lakehouse capabilities. Highly proficient in Python and Spark and familiar with a variety of development technologies. Skilled in decomposing solutions into components (Epics, stories) to streamline development. Proficient in creating clear, comprehensive documentation. Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation. Previous Financial Services experience delivering data solutions against financial and market reference data. Solid grasp of Data Governance and Data Management concepts, including metadata management, master data management, and data quality. Educational Background: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential. Bonus Skills: A working knowledge of Indices, index construction, and Asset Management principles.
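For the Lakehouse migration work mentioned above, an incremental load into an Iceberg table is a typical building block. A minimal PySpark sketch, assuming a Spark session already configured with the Iceberg runtime and an AWS Glue catalog; catalog, path, and column names are hypothetical:

```python
from pyspark.sql import SparkSession

# Assumes the session is launched with the Iceberg runtime and a Glue catalog
# (e.g., spark.sql.catalog.lakehouse = org.apache.iceberg.spark.SparkCatalog)
spark = SparkSession.builder.appName("lakehouse-etl").getOrCreate()

# Incremental ETL step: read the day's new source files
updates = spark.read.parquet("s3://example-raw-zone/trades/2024-06-01/")
updates.createOrReplaceTempView("trade_updates")

# Upsert into the Iceberg table so reruns stay idempotent
spark.sql("""
    MERGE INTO lakehouse.finance.trades t
    USING trade_updates u
    ON t.trade_id = u.trade_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```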
Posted 1 day ago
8.0 - 10.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary: Experience: 4 - 8 years. Location: Bangalore. The Data Engineer will contribute to building state-of-the-art data Lakehouse platforms in AWS, leveraging Python and Spark. You will be part of a dynamic team, building innovative and scalable data solutions in a supportive and hybrid work environment. You will design, implement, and optimize workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires previous experience of building data products using AWS services, familiarity with Python and Spark, problem-solving skills, and the ability to collaborate effectively within an agile team. Must Have Tech Skills: Demonstrable previous experience as a data engineer. Technical knowledge of data engineering solutions and practices. Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena. Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices. Nice To Have Tech Skills: Familiar with data services in a Lakehouse architecture. Familiar with technical design practices, allowing for the creation of scalable, reliable data products that meet both technical and business requirements. A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous. Key Accountabilities: Writes high-quality code, ensuring solutions meet business requirements and technical standards. Works with architects, Product Owners, and Development leads to decompose solutions into Epics, assisting the design and planning of these components. Creates clear, comprehensive technical documentation that supports knowledge sharing and compliance. Experience in decomposing solutions into components (Epics, stories) to streamline development. Actively contributes to technical discussions, supporting a culture of continuous learning and innovation. Key Skills: Proficient in Python and familiar with a variety of development technologies. Previous experience of implementing data pipelines, including use of ETL tools to streamline data ingestion, transformation, and loading. Solid understanding of AWS services and cloud solutions, particularly as they pertain to data engineering practices. Familiar with AWS solutions including IAM, Step Functions, Glue, Lambda, RDS, SQS, API Gateway, and Athena. Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation. Experienced in Agile development, including sprint planning, reviews, and retrospectives. Educational Background: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential. Bonus Skills: Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices. Familiar with implementing and optimizing CI/CD pipelines. Understands the processes that enable rapid, reliable releases, minimizing manual effort and supporting agile development cycles.
Posted 1 day ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The Experience plus Devices (E+D) Growth team is seeking engineers to help accelerate the adoption of Copilot and Microsoft 365. Our team is uniquely positioned at the strategic epicenter of E+D for revolutionizing the productivity business by delivering embedded experiences across the Microsoft 365 suite (Teams, Outlook, Word, PowerPoint, Excel, etc.) that drive the growth of our company's cutting-edge generative AI solutions across the commercial and consumer spectrum. Our team tackles technical challenges across a diverse tech stack, with the solutions we deliver having a direct impact on the bottom line of the business. This role requires strategic and creative thinking, as well as a passion for building technical solutions that address customer needs. We are a modern engineering organization that embodies industry best practices in Product-Led Growth (PLG). We are data-informed, hypothesis-driven, and rigorous in measuring outcomes to ensure undeniable customer and business impact. We collaborate closely with industry-leading PMs, designers, data scientists, user researchers, and marketers to build deep customer insights that inform the design of experiences used by hundreds of millions of people every day. We partner with teams across the company to deliver world-class services, and we create experiences that connect with customers across Microsoft products. We play a direct role in driving business growth and framing our business value to end users and our vibrant community of fans. We are looking for a Software Engineer to join us. Building a successful team involves creating an inclusive workplace where all people and ideas are welcome. We invest in the health of our team and take "how we work" as seriously as the impact we have on customers and the business. You don't need to know everything when you join our team; just bring your growth mindset and willingness to learn, and we will provide mentorship and career growth to help you succeed. The team's focus is on the following three main areas: Full-Stack: Develop end-to-end features using React/TypeScript frontends paired with C#/.NET REST or GraphQL APIs and scalable data models, built to support millions of daily users. Backend: Design and scale Azure-hosted services in C#/.NET that handle hundreds of millions of requests daily, with opportunities to lead the adoption of event-driven architectures. Client: Deliver high-performance native Android/iOS/Windows/macOS experiences using modern C++, Java, Kotlin, Objective-C, Swift, and platform-specific frameworks, enabling contextual Product-Led Growth motions within Microsoft 365 apps. As a Software Engineer 2, you will play a critical role in driving the adoption and monetization of Microsoft 365 Copilot through Product-Led Growth methodologies. The position requires building new experiences, running experimentation, and making data-driven decisions to identify ship candidates. The role will provide opportunities for impact in a high-growth area for E+D. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. We are builders, explorers, and connectors, and we are looking for like-minded Software Engineers who thrive on driving big ideas from spark to scale.
We are looking for candidates with a growth mindset who foster collaboration with teammates and partners. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day. Responsibilities: Works with appropriate stakeholders to determine user requirements for a set of features. Contributes to the identification of dependencies and the development of design documents for a product area with little oversight. Creates and implements code for a product, service, or feature, reusing code as applicable. Contributes to efforts to break down larger work items into smaller work items and provides estimation. Acts as a Designated Responsible Individual (DRI) working on-call to monitor the system/product/service for degradation, downtime, or interruptions, and gains approval to restore the system/product/service for simple problems. Remains current in skills by investing time and effort into staying abreast of current developments that will improve the availability, reliability, efficiency, observability, and performance of products while also driving consistency in monitoring and operations at scale. Qualifications: Required Qualifications: Bachelor's Degree in Computer Science or related technical field AND 4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR equivalent experience; OR Master's Degree in Computer Science or related technical field AND 2+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python. Solid understanding of software design principles and best practices. Excellent problem-solving and analytical skills. Good design, coding, debugging, teamwork, partnership, and communication skills. #DPG #ExDGrowth #IDCMicrosoft Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 day ago
3.0 years
8 - 30 Lacs
Gurugram, Haryana, India
On-site
Industry & Sector: Enterprise cloud consulting and data analytics services provider delivering large-scale AWS lakehouse, real-time analytics, and AI solutions for global clients. Role & Responsibilities: Design, build, and optimise high-volume data pipelines on AWS using Spark (Scala) and Glue. Develop reusable ETL frameworks that ingest structured and semi-structured data into S3-based data lakes. Tune Spark jobs for cost, latency, and scalability; implement partitioning, caching, and checkpointing best practices. Collaborate with data scientists to productionise feature engineering and model-output pipelines. Automate deployment via CloudFormation or Terraform and integrate monitoring with CloudWatch and Prometheus. Champion coding standards, peer reviews, and knowledge sharing across the data engineering guild. Skills & Qualifications: Must-Have: 3+ years building Spark applications in Scala on AWS; hands-on with Glue, EMR, S3, IAM, and Step Functions; proficient in SQL, data modelling, and partitioning strategies; version control with Git and CI/CD pipelines (CodePipeline, Jenkins, or similar). Preferred: Experience with Delta Lake or Iceberg table formats; knowledge of Python for orchestration tasks; exposure to streaming (Kafka, Kinesis) and near-real-time processing; certification in AWS Data Analytics or Solutions Architect. Benefits & Culture Highlights: On-site, engineer-first culture with dedicated R&D sprints and tech conference sponsorship. Performance-linked bonuses and accelerated promotion paths for high impact. Collaborative workspace with wellness programs, hack days, and a flexible leave policy. Skills: EMR, S3, data engineering, Python, Iceberg, SQL, Kafka, CI/CD, AWS Data Engineer (Spark Scala), AWS, CodePipeline, Jenkins, data modelling, Git, Delta Lake, Scala, IAM, Step Functions, DevOps, Spark, Apache Spark, Kinesis, Glue, partitioning strategies
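To illustrate the partitioning, caching, and checkpointing practices named above — a hedged sketch shown in PySpark for consistency with the other examples here, although the role itself centres on Spark in Scala; all paths, configuration values, and column names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("pipeline-tuning")
         .config("spark.sql.shuffle.partitions", "400")  # size shuffles to data volume
         .getOrCreate())
spark.sparkContext.setCheckpointDir("s3://example-bucket/checkpoints/")

events = spark.read.parquet("s3://example-bucket/raw/events/")

# Cache a DataFrame that several downstream aggregations reuse
enriched = events.filter("event_type = 'purchase'").cache()

# Checkpoint to truncate a long lineage before further iterative processing
stable = enriched.checkpoint()

# Write partitioned by date so consumers can prune partitions at read time
(stable.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://example-bucket/curated/purchases/"))
```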
Posted 1 day ago
5.0 years
6 - 24 Lacs
Chennai, Tamil Nadu, India
On-site
Teradata SQL Developer. Operating at the intersection of Enterprise Data Warehousing and Financial Services analytics, we help blue-chip clients unlock actionable insights through high-volume Teradata platforms. Our on-premise engineering team builds high-performance SQL solutions that power regulatory reporting, customer 360 views, and real-time decision systems across APAC markets. Role & Responsibilities: Design, develop, and optimize Teradata SQL scripts and BTEQ batches for large-scale data warehouses. Implement performance-tuning strategies including statistics management, index design, and partition optimization. Build robust ETL workflows integrating Informatica/Python and Unix shell to ingest structured and semi-structured data. Collaborate with data modelers and analysts to translate business logic into efficient logical and physical models. Automate deployment, scheduling, and monitoring using Teradata Viewpoint or Control-M for 24x7 stability. Provide L3 production support, root cause analysis, and knowledge transfer to client stakeholders. Skills & Qualifications: Must-Have: 5+ years of hands-on Teradata SQL development for enterprise data warehouses. Deep expertise in BTEQ, FastLoad, MultiLoad, and TPT utilities. Solid grasp of query plan analysis, Collect Stats, and primary/secondary index design. Proficiency in Unix shell scripting and at least one ETL tool such as Informatica or Talend. Ability to debug, profile, and optimize workloads exceeding 5 TB. Preferred: Exposure to Data Vault or 3NF modeling methodologies. Knowledge of Python or Spark for data transformation. Benefits & Culture: On-site client engagement offering direct business impact and rapid career growth. Learning budget for advanced Teradata certifications and cloud migration skills. Collaborative, merit-based environment with hackathons and internal guilds. Work Location: India (on-site, Monday to Friday). Skills: Unix shell scripting, MultiLoad, FastLoad, primary/secondary index design, TPT utilities, Spark, Teradata SQL, Python, Teradata, ETL, BTEQ, query optimization, performance tuning, query plan analysis, Informatica, Collect Stats, data modeling, data warehousing
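As a small illustration of the statistics-management and query-plan work described above — a sketch using the Teradata SQL Driver for Python (teradatasql); the host, credentials, and table names are hypothetical:

```python
import teradatasql  # Teradata SQL Driver for Python

# Connection details are placeholders
with teradatasql.connect(host="td.example.com", user="etl_user",
                         password="***") as con:
    with con.cursor() as cur:
        # Refresh optimizer statistics after a large load so the
        # optimizer costs joins and retrievals accurately
        cur.execute("COLLECT STATISTICS ON edw.customer_txn COLUMN (customer_id)")

        # Review the query plan before scheduling a heavy batch query
        cur.execute("EXPLAIN SELECT customer_id, SUM(amount) "
                    "FROM edw.customer_txn GROUP BY 1")
        for row in cur.fetchall():
            print(row[0])  # each row is one line of the plan text
```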
Posted 1 day ago
3.0 years
8 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
Industry & Sector: Enterprise cloud consulting and data analytics services provider delivering large-scale AWS lakehouse, real-time analytics, and AI solutions for global clients. Role & Responsibilities: Design, build, and optimise high-volume data pipelines on AWS using Spark (Scala) and Glue. Develop reusable ETL frameworks that ingest structured and semi-structured data into S3-based data lakes. Tune Spark jobs for cost, latency, and scalability; implement partitioning, caching, and checkpointing best practices. Collaborate with data scientists to productionise feature engineering and model-output pipelines. Automate deployment via CloudFormation or Terraform and integrate monitoring with CloudWatch and Prometheus. Champion coding standards, peer reviews, and knowledge sharing across the data engineering guild. Skills & Qualifications: Must-Have: 3+ years building Spark applications in Scala on AWS; hands-on with Glue, EMR, S3, IAM, and Step Functions; proficient in SQL, data modelling, and partitioning strategies; version control with Git and CI/CD pipelines (CodePipeline, Jenkins, or similar). Preferred: Experience with Delta Lake or Iceberg table formats; knowledge of Python for orchestration tasks; exposure to streaming (Kafka, Kinesis) and near-real-time processing; certification in AWS Data Analytics or Solutions Architect. Benefits & Culture Highlights: On-site, engineer-first culture with dedicated R&D sprints and tech conference sponsorship. Performance-linked bonuses and accelerated promotion paths for high impact. Collaborative workspace with wellness programs, hack days, and a flexible leave policy. Skills: EMR, S3, data engineering, Python, Iceberg, SQL, Kafka, CI/CD, AWS Data Engineer (Spark Scala), AWS, CodePipeline, Jenkins, data modelling, Git, Delta Lake, Scala, IAM, Step Functions, DevOps, Spark, Apache Spark, Kinesis, Glue, partitioning strategies
Posted 1 day ago
9.0 - 13.0 years
13 - 18 Lacs
Hyderabad
Work from Office
This role involves the development and application of engineering practice and knowledge in defining, configuring, and deploying industrial digital technologies, including but not limited to PLM and MES, for managing continuity of information across the engineering enterprise, including design, industrialization, and the manufacturing supply chain, and for managing manufacturing data. Grade Specific: Focus on Digital Continuity Manufacturing. Fully competent in own area. Acts as a key contributor in a more complex, critical environment. Proactively acts to understand and anticipate client needs. Manages costs and profitability for a work area. Manages own agenda to meet agreed targets. Develops plans for projects in own area. Looks beyond the immediate problem to the wider implications. Acts as a facilitator and coach and moves teams forward.
Posted 1 day ago
7.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
The engineer is expected to help set up automation and CI/CD pipelines across some of the new frameworks being set up by the Blueprints & Continuous Assurance squad. Our squad is working on multiple streams to improve the cloud security posture for the bank. Required skills: Strong hands-on experience with and understanding of the GCP cloud. Strong experience with automation and familiarity with one or more scripting languages like Python, Go, etc. Knowledge of and experience with an Infrastructure as Code language like Terraform (preferred), CloudFormation, etc. Ability to quickly learn the frameworks and tech stack used and contribute towards the goals of the squad. Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance. 3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders. Grade Specific: Is highly respected, experienced, and trusted. Masters all phases of the software development lifecycle and applies innovation and industrialization. Shows a clear dedication and commitment to business objectives and responsibilities and to the group as a whole. Operates with no supervision in highly complex environments and takes responsibility for a substantial aspect of Capgemini's activity. Is able to manage difficult and complex situations calmly and professionally. Considers 'the bigger picture' when making decisions and demonstrates a clear understanding of commercial and negotiating principles in less-easy situations. Focuses on developing long-term partnerships with clients. Demonstrates leadership that balances business, technical, and people objectives. Plays a significant part in the recruitment and development of people. Skills (competencies): Verbal Communication
Posted 1 day ago
2.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance. 3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders. Grade Specific: Has a very deep understanding of software development principles and technical proficiency. Masters all phases of the software development lifecycle and drives innovation and industrialization. Works on highly complex tasks and problems and drives technical decisions at a high level. Clear evidence of thought leadership in the market. Builds, educates, and integrates effective teams. Focuses on developing long-term partnerships with clients. Takes full responsibility for outcomes. Has a strong understanding of contractual, financial, and technical considerations. Exhibits strong commercial management skills. Takes a high degree of responsibility and ownership of people issues. Skills (competencies): Verbal Communication
Posted 1 day ago
7.0 - 12.0 years
14 - 18 Lacs
Kolkata
Remote
Senior DevOps Engineer – Infrastructure & Platform Specialist. Department: Product and Engineering. Location: Remote / Kolkata, WB (On-site). Job Summary: A Senior DevOps Engineer is responsible for designing, implementing, and maintaining the operational aspects of cloud infrastructure. Their goal is to ensure high availability, scalability, performance, and security of cloud-based systems. Key Responsibilities: Design and maintain scalable, reliable, and secure cloud infrastructure. Address integration challenges and data consistency. Choose appropriate cloud services (e.g., compute, storage, networking) based on business needs. Define architectural best practices and patterns (e.g., microservices, serverless, containerization). Ensure version control and repeatable deployments of infrastructure. Automate cloud operations tasks (e.g., deployments, patching, backups). Implement CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI, etc. Design and implement cloud monitoring and alerting systems (e.g., CloudWatch, Azure Monitor, Prometheus, Datadog, ManageEngine). Optimize performance, resource utilization, and cost across environments. Capacity planning of resources. Resource planning and deployment (HW, SW, Capex). Financial forecasting. Tracking and management of the allotted budget. Cost optimization with proper architecture and open-source technologies. Ensure cloud systems follow security best practices (e.g., encryption, IAM, zero-trust principles, VAPT). Implement compliance controls (e.g., HIPAA, GDPR, ISO 27001). Conduct regular security audits and assessments. Build systems for high availability, failover, disaster recovery, and business continuity. Participate in incident response and post-mortems. Implement and manage Service Level Objectives (SLOs) and Service Level Indicators (SLIs). Work closely with development, security, and IT teams to align cloud operations with business goals. Define governance standards for cloud usage, billing, and resource tagging. Provide guidance and mentorship to DevOps and engineering teams. Keep infrastructure/deployment documentation up to date. Interact with prospective customers in pre-sales meetings to showcase the architecture and security layer of the product and answer questions. Key Skills & Qualifications: Technical Skills: VM provisioning and infrastructure ops on AWS, GCP, or Azure. Experience with API gateways (Kong, AWS API Gateway, NGINX). Experience managing MySQL and MongoDB on self-hosted infrastructure. Operational expertise with Elasticsearch or Solr. Proficient with Kafka, RabbitMQ, or similar message brokers. Hands-on experience with Airflow, Temporal, or other workflow orchestration tools. Familiarity with Apache Spark, Flink, Confluent/Debezium, or similar streaming frameworks. Strong skills in Docker, Kubernetes, and deployment automation. Experience writing IaC with Terraform, Ansible, or CloudFormation. Building and maintaining CI/CD pipelines (GitLab, GitHub Actions, Jenkins). Experience with monitoring/logging stacks like Prometheus, Grafana, ELK, or Datadog. Sound knowledge of networking fundamentals (routing, DNS, VPN, TLS/SSL, firewalls). Experience designing and managing HA/DR/BCP infrastructure. Bonus Skills: Prior involvement in SOC 2 / ISO 27001 audits or documentation. Hands-on with VAPT processes, especially working directly with clients or security partners. Scripting in Go, in addition to Bash/Python. Exposure to service mesh tools like Istio or Linkerd.
Experience: Must have 7+ years of experience as a DevOps Engineer.
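As one concrete example of the deployment-automation skills listed above — a hedged sketch using the official Kubernetes Python client to trigger a rolling restart; the deployment and namespace names are hypothetical:

```python
from datetime import datetime, timezone
from kubernetes import client, config  # official Kubernetes Python client

# Load credentials from the local kubeconfig (in-cluster config also works)
config.load_kube_config()
apps = client.AppsV1Api()

def rolling_restart(deployment: str, namespace: str = "default") -> None:
    """Trigger a rolling restart by patching the pod template annotation,
    mirroring what `kubectl rollout restart deployment/<name>` does."""
    patch = {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "kubectl.kubernetes.io/restartedAt":
                            datetime.now(timezone.utc).isoformat()
                    }
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

rolling_restart("api-gateway", namespace="prod")  # placeholder names
```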
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Position: Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective. Key Responsibilities: Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency. Mentor junior data engineers within the organization. Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric). Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen 2 and Azure Blob Storage). Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions. Optimize data pipelines in the Azure environment for performance, scalability, and reliability. Ensure data quality and integrity through data validation techniques and frameworks. Develop and maintain documentation for data processes, configurations, and best practices. Monitor and troubleshoot data pipeline issues to ensure timely resolution. Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge. Manage the CI/CD process for deploying and maintaining data solutions. Required Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience) and able to demonstrate high proficiency in programming fundamentals. At least 5 years (typically 5-10 years) of proven experience as a Data Engineer or in a similar role dealing with data and ETL processes. Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage, and Azure Data Lake Gen 2. Experience utilizing SQL DML to query modern RDBMSs in an efficient manner (e.g., SQL Server, PostgreSQL). Strong understanding of Software Engineering principles and how they apply to Data Engineering (e.g., CI/CD, version control, testing). Experience with big data technologies (e.g., Spark). Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Preferred Qualifications: Learning agility. Technical leadership. Consulting and managing business needs. Strong experience in Python is preferred, but experience in other languages such as Scala, Java, C#, etc. is accepted. Experience building Spark applications utilizing PySpark. Experience with file formats such as Parquet, Delta, and Avro. Experience efficiently querying API endpoints as a data source. Understanding of the Azure environment and related services such as subscriptions, resource groups, etc. Understanding of Git workflows in software development. Using Azure DevOps pipelines and repositories to deploy and maintain solutions. Understanding of Ansible and how to use it in Azure DevOps pipelines. Chevron ENGINE supports global operations, supporting business requirements across the world. Accordingly, the work hours for employees will be aligned to support business requirements. The standard work week will be Monday to Friday, with working hours of 8:00am to 5:00pm or 1:30pm to 10:30pm.
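A minimal sketch of the kind of Azure pipeline step described above, assuming a Databricks/PySpark environment with ADLS Gen2 access already configured; the storage account, container, and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("azure-sensor-etl").getOrCreate()

# Read raw JSON events from the ADLS Gen2 raw zone (placeholder path)
raw = spark.read.json(
    "abfss://raw@examplestorageacct.dfs.core.windows.net/sensor-events/")

# Validate non-null keys, derive a partition date, stamp the load time
validated = (raw.dropna(subset=["device_id", "event_time"])
                .withColumn("event_date", F.to_date("event_time"))
                .withColumn("ingested_at", F.current_timestamp()))

# Land the curated output as a partitioned Delta table for consumers
(validated.write
          .format("delta")
          .mode("append")
          .partitionBy("event_date")
          .save("abfss://curated@examplestorageacct.dfs.core.windows.net/sensor-events/"))
```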
Chevron participates in E-Verify in certain locations as required by law.
Posted 1 day ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs. From advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection. This role is responsible for executing multiple tracks of work to deliver Big Data solutions enabling advanced data science and analytics. This includes working with the team on new Big Data systems for analyzing data; the coding and development of advanced analytics solutions to make/optimize business decisions and processes; and integrating new tools to improve descriptive, predictive, and prescriptive analytics. This role contributes to the structured and unstructured Big Data / Data Science tools of Allstate, from traditional to emerging analytics technologies and methods, and is responsible for assisting in the selection and development of other team members. Key Responsibilities: Participates in the development of moderately complex and occasionally complex technical solutions using Big Data techniques in data and analytics processes. Develops innovative solutions within the team. Participates in the development of moderately complex and occasionally complex prototypes and department applications that integrate Big Data and advanced analytics to make business decisions. Uses new areas of Big Data technologies (ingestion, processing, distribution) and research delivery methods that can solve business problems. Understands the Big Data related problems and requirements to identify the correct technical approach. Takes coaching from key team members to ensure efforts within owned tracks of work will meet their needs. Executes moderately complex and occasionally complex functional work tracks for the team. Partners with Allstate Technology teams on Big Data efforts. Partners closely with team members on Big Data solutions for our data science community and analytic users. Leverages and uses Big Data best practices / lessons learned to develop technical solutions. Education: 4-year Bachelor's Degree (Preferred). Experience: 2 or more years of experience (Preferred). Supervisory Responsibilities: This job does not have supervisory duties. Education & Experience (in lieu): In lieu of the above education requirements, an equivalent combination of education and experience may be considered. Primary Skills: Big Data Engineering, Big Data Systems, Big Data Technologies, Data Science, Influencing Others. Recruiter Info: Annapurna Jha (ajhat@allstate.com). About Allstate: The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to be the critical strategic business services arm of the corporation.
With offices in Bengaluru and Pune, the company offers expertise to the parent organizations business areas including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
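For illustration only, here is a minimal sketch of the kind of batch ingestion work this role describes, written in PySpark; the bucket paths and column names (event_ts, event_id) are hypothetical, not Allstate's actual systems:

```python
# Minimal batch-ingestion sketch: land raw CSV events in a partitioned
# Parquet data-lake table. All paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-event-ingestion").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3a://example-bucket/raw/events/"))  # hypothetical source path

cleaned = (raw
           .withColumn("event_date", F.to_date("event_ts"))  # hypothetical column
           .dropDuplicates(["event_id"]))

# Partitioning by date keeps downstream analytical scans cheap.
(cleaned.write
 .mode("append")
 .partitionBy("event_date")
 .parquet("s3a://example-bucket/curated/events/"))
```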
Posted 1 day ago
4.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs, from advocating for seat belts, air bags, and graduated driving laws to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection.
This role is responsible for driving multiple complex tracks of work to deliver Big Data solutions enabling advanced data science and analytics. This includes working with the team on new Big Data systems for analyzing data; coding and developing advanced analytics solutions to make or optimize business decisions and processes; integrating new tools to improve descriptive, predictive, and prescriptive analytics; and discovering new technical challenges that can be solved with existing and emerging Big Data hardware and software solutions. The role contributes to Allstate's structured and unstructured Big Data / Data Science tools, from traditional to emerging analytics technologies and methods, and assists in the selection and development of other team members.
Skills
Scala & Spark (primary): strong in functional programming and big data processing using Spark
Java: proficient in Java 8+, REST API development, multithreading, and OOP concepts; good hands-on experience with MongoDB
CAAS: experience with Docker, Kubernetes, and deploying containerized apps
Tools: Git, CI/CD, JSON, SBT/Maven, Agile methodologies
Key Responsibilities
Use new areas of Big Data technologies (ingestion, processing, distribution) and research delivery methods that can solve business problems
Participate in the development of complex prototypes and department applications that integrate Big Data and advanced analytics to make business decisions
Support innovation; regularly provide new ideas to improve the people, processes, and technology that interact with the analytic ecosystem
Participate in the development of complex technical solutions using Big Data techniques in data & analytics processes
Influence the team on the effectiveness of Big Data systems in solving business problems
Leverage Big Data best practices and lessons learned to develop technical solutions used for descriptive analytics, ETL, predictive modeling, and prescriptive "real time decisions" analytics
Partner closely with team members on Big Data solutions for our data science community and analytic users
Partner with Allstate Technology teams on Big Data efforts
Education: Master's Degree (Preferred)
Experience: 6 or more years of experience (Preferred)
Primary Skills: Apache Spark, Big Data, Big Data Engineering, Big Data Systems, Big Data Technologies, CasaXPS, CI/CD, Data Science, Docker (Software), Git, Influencing Others, Java, MongoDB, Multithreading, RESTful APIs, Scala (Programming Language), ScalaTest, Spring Boot
Recruiter Info: rkotz@allstate.com
About Allstate: The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation.
The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to be the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
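As a rough illustration of the descriptive-analytics and ETL work this posting names, a minimal Spark aggregation sketch follows, in PySpark for brevity even though the role itself is Scala-focused; the dataset, paths, and columns are invented:

```python
# Minimal descriptive-analytics sketch: daily claim summaries by policy type.
# The table, paths, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-daily-summary").getOrCreate()

claims = spark.read.parquet("s3a://example-bucket/curated/claims/")  # hypothetical

daily_summary = (claims
                 .groupBy("policy_type", "claim_date")
                 .agg(F.count("*").alias("claim_count"),
                      F.sum("claim_amount").alias("total_paid"),
                      F.avg("claim_amount").alias("avg_paid")))

daily_summary.write.mode("overwrite").parquet(
    "s3a://example-bucket/analytics/claims_daily/")
```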
Posted 1 day ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About us: Netcore Cloud is a MarTech platform helping businesses design, execute, and optimize campaigns across multiple channels. With a strong focus on leveraging data, machine learning, and AI, we empower our clients to make smarter marketing decisions and deliver exceptional customer experiences. Our team is passionate about innovation and collaboration, and we are looking for a talented Lead Data Scientist to guide and grow our data science team.
Role Summary: As the Lead Data Scientist, you will head our existing data science team, driving the development of advanced machine learning models and AI solutions. You will play a pivotal role in shaping our ML/AI strategy, leading projects across NLP, deep learning, predictive analytics, and recommendation systems, while ensuring alignment with business goals. Exposure to Agentic AI systems and their evolving applications will be a valuable asset in this role, as we continue to explore autonomous, goal-driven AI workflows. This role combines hands-on technical leadership with strategic decision-making to build impactful solutions for our customers.
Key Responsibilities:
Leadership and Team Management: Lead and mentor the existing data science engineers, fostering skill development and collaboration. Provide technical guidance, conduct code reviews, and ensure best practices in model development and deployment.
Model Development and Innovation: Design and build machine learning models for tasks like NLP, recommendation systems, customer segmentation, and predictive analytics. Research and implement state-of-the-art ML/AI techniques to solve real-world problems. Ensure models are scalable, reliable, and optimized for performance in production environments. We operate in AWS and GCP, so prior hands-on experience setting up MLOps workflows on at least one of these cloud providers is mandatory.
Business Alignment: Collaborate with cross-functional teams (engineering, product, marketing, etc.) to identify opportunities where AI/ML can drive value. Translate business problems into data science solutions and communicate findings to stakeholders effectively. Drive data-driven decision-making to improve user engagement, conversions, and campaign outcomes.
Technology and Tools: Work with large-scale datasets, ensuring data quality and scalability of solutions. Leverage cloud platforms like AWS and GCP for model training and deployment. Utilize tools and libraries such as Python, TensorFlow, PyTorch, Scikit-learn, and Spark for development. With so much innovation happening around Gen AI and LLMs, we prefer candidates who already have hands-on exposure to this space via AWS Bedrock or Google Vertex AI.
Qualifications:
Education: Master's or PhD in Computer Science, Data Science, Mathematics, or a related field.
Experience: More than 8 years of industry experience, including 5+ years in data science and at least 2 years in a leadership role managing a strong technical team. Proven expertise in machine learning, deep learning, NLP, and recommendation systems. Hands-on experience deploying ML models in production at scale. Experience in MarTech or working on customer-facing AI solutions is a plus.
Technical Skills: Proficiency in Python, SQL, and ML frameworks like TensorFlow or PyTorch. Strong understanding of statistical methods, predictive modeling, and algorithm design. Familiarity with cloud-based solutions (AWS SageMaker, GCP AI Platform, or similar).
Soft Skills: Excellent communication skills to present complex ideas to both technical and non-technical stakeholders. Strong problem-solving mindset and the ability to think strategically. A passion for innovation and staying up to date with the latest trends in AI/ML.
Why Join Us: Opportunity to work on cutting-edge AI/ML projects impacting millions of users. Be part of a collaborative, innovation-driven team in a fast-growing MarTech company. Competitive salary, benefits, and a culture that values learning and growth.
Location: Bengaluru
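Since the posting calls out customer segmentation and lists Scikit-learn among its tools, here is a minimal, hypothetical sketch of that task; the feature matrix is synthetic and the cluster count is an assumption:

```python
# Minimal customer-segmentation sketch with scikit-learn. The RFM-style
# features are synthetic; a real pipeline would derive them from campaign data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical features per customer: recency, frequency, monetary value.
X = rng.normal(size=(1_000, 3))

X_scaled = StandardScaler().fit_transform(X)  # k-means is scale-sensitive
segments = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X_scaled)

print(np.bincount(segments))  # customers per segment
```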
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us: MUFG Bank, Ltd. is Japan's premier bank, with a global network spanning more than 40 markets. Outside of Japan, the bank offers an extensive scope of commercial and investment banking products and services to businesses, governments, and individuals worldwide. MUFG Bank's parent, Mitsubishi UFJ Financial Group, Inc. (MUFG), is one of the world's leading financial groups. Headquartered in Tokyo and with over 360 years of history, the Group has about 120,000 employees and offers services including commercial banking, trust banking, securities, credit cards, consumer finance, asset management, and leasing. The Group aims to be the world's most trusted financial group by collaborating closely among its operating companies, flexibly responding to all the financial needs of its customers, serving society, and fostering shared and sustainable growth for a better world. MUFG's shares trade on the Tokyo, Nagoya, and New York stock exchanges.
MUFG Global Service Private Limited: Established in 2020, MUFG Global Service Private Limited (MGS) is a 100% subsidiary of MUFG with offices in Bengaluru and Mumbai. MGS India has been set up as a Global Capability Centre / Centre of Excellence to provide support services across various functions such as IT, KYC/AML, Credit, Operations, etc. to MUFG Bank offices globally. MGS India plans to significantly ramp up its growth over the next 18-24 months while servicing MUFG's global network across the Americas, EMEA, and Asia Pacific.
About the Role:
Position Title: Data Engineer or Data Visualization Analyst
Corporate Title: AVP
Reporting to: Vice President
Position details: Cloud Data Engineer with a strong technology background and hands-on experience working in an enterprise environment designing and implementing data warehouses, data lakes, and data marts for large financial institutions. Alternatively, a Data Visualization Analyst with BI & analytics experience and exposure to data engineering or data pipelines can be considered. In this role you will work with technology and business leads to build or enhance critical enterprise applications both on-prem and in the cloud (AWS), along with a modern data platform such as Snowflake and a data virtualization tool such as Starburst. Successful candidates will possess in-depth knowledge of current and emerging technologies and demonstrate a passion for designing and building elegant solutions and for continuous self-improvement.
Roles and Responsibilities:
Manage data analysis and data integration of disparate systems
Create a semantic layer for data virtualization that connects to heterogeneous data repositories
Develop reports and dashboards using Tableau, Power BI, or similar BI tools as assigned
Assist the Data Management Engineering team (either for Data Pipelines Engineering or Data Service & Data Access Engineering) with ETL or BI design and other framework-related items
Work with business users to translate functional specifications into technical designs for implementation and deployment
Extract, transform, and load large volumes of structured and unstructured data from various sources into AWS data lakes or modern data platforms
Assist with data quality controls as assigned
Work with cross-functional team members to develop prototypes, produce design artifacts, develop components, and perform and support SIT and UAT testing, triaging, and bug fixing
Optimize and fine-tune data pipeline jobs for performance and scalability
Implement data quality and data validation processes to ensure data accuracy and integrity
Provide problem-solving expertise and complex analysis of data to develop business intelligence integration designs
Convert physical data integration models and other design specifications to source code
Ensure high quality and optimum performance of data integration systems to meet business solutions
Job Requirements:
Bachelor's Degree (or foreign equivalent degree) in Information Technology, Information Systems, Computer Science, Software Engineering, or a related field
Experience in the financial services or banking industry is preferred
5+ years of experience working as a Data Engineer, with a focus on building data pipelines and processing large datasets
AWS certifications in data-related specialties are a plus
Business Acumen – 15%: Knowledge of banking & financial services products (such as loans, deposits, forex, etc.). Knowledge of operational/MIS reports and risk and regulatory reporting for a US bank is a plus.
Data Skills – 25%: Proficiency in data warehousing concepts, data lake & data mesh concepts, data modeling, databases, data governance, data security/protection, and data access. Solid understanding of data modeling, database design, and ETL principles. Familiarity with data governance, data security, and compliance practices in cloud environments.
Tech Skills – 50%: 5+ years of expertise in HiveQL and Python programming, with experience using Spark and Scala for big data processing and analysis. 2+ years of strong proficiency in AWS services, including AWS Glue, Redshift, EMR, RDS, Kinesis, S3, Athena, DynamoDB, Step Functions, and Lambda. 3+ years of experience with data visualization tools such as Tableau or Power BI. 2+ years of experience with ETL technologies such as Informatica PowerCenter or SSIS, with the most recent 2-3 years on cloud ETL technologies. 2+ years of experience with data pipelines on modern data platforms such as Snowflake or Databricks. Exposure to data virtualization tools such as Starburst or Denodo is a plus. Strong problem-solving skills and the ability to optimize and fine-tune data pipelines and Spark jobs for performance. Experience working with data lakes, data warehouses, and distributed computing systems. Experience with the modern data stack and cloud technologies is a must.
Human Skills – 10%: Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
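The posting repeatedly stresses data quality and validation within pipelines; a minimal sketch of such a gate in PySpark follows, where the thresholds, paths, and the account_id column are assumptions rather than MUFG specifics:

```python
# Minimal data-quality gate: validate row count and a null-rate threshold
# before publishing a staging table. Every name and threshold is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("s3a://example-bucket/staging/transactions/")  # hypothetical

total = df.count()
assert total > 0, "staging table is empty"

null_count = df.filter(F.col("account_id").isNull()).count()
null_rate = null_count / total
assert null_rate < 0.01, f"account_id null rate too high: {null_rate:.2%}"

# Only publish once the checks pass.
df.write.mode("overwrite").parquet("s3a://example-bucket/published/transactions/")
```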
Posted 1 day ago
3.0 years
8 - 30 Lacs
Hyderabad, Telangana, India
On-site
Industry & Sector: Enterprise cloud consulting and data analytics services provider delivering large-scale AWS lakehouse, real-time analytics and AI solutions for global clients.
Role & Responsibilities
Design, build and optimise high-volume data pipelines on AWS using Spark (Scala) and Glue.
Develop reusable ETL frameworks that ingest structured and semi-structured data into S3-based data lakes.
Tune Spark jobs for cost, latency and scalability; implement partitioning, caching and checkpointing best practices (see the sketch after this posting).
Collaborate with data scientists to productionise feature engineering and model-output pipelines.
Automate deployment via CloudFormation or Terraform and integrate monitoring with CloudWatch and Prometheus.
Champion coding standards, peer reviews and knowledge sharing across the data engineering guild.
Skills & Qualifications
Must-Have: 3+ years building Spark applications in Scala on AWS; hands-on with Glue, EMR, S3, IAM and Step Functions; proficient in SQL, data modelling and partitioning strategies; version control with Git and CI/CD pipelines (CodePipeline, Jenkins or similar).
Preferred: Experience with Delta Lake or Iceberg table formats; knowledge of Python for orchestration tasks; exposure to streaming (Kafka, Kinesis) and near-real-time processing; certification in AWS Data Analytics or Solutions Architect.
Benefits & Culture Highlights
On-site, engineer-first culture with dedicated R&D sprints and tech conference sponsorship.
Performance-linked bonuses and accelerated promotion paths for high impact.
Collaborative workspace with wellness programs, hack days and flexible leave policy.
Skills: EMR, S3, data engineering, Python, Iceberg, SQL, Kafka, CI/CD, AWS Data Engineer (Spark Scala), AWS, CodePipeline, Jenkins, data modelling, Git, Delta Lake, Scala, IAM, Step Functions, DevOps, Spark, Apache Spark, Kinesis, Glue, partitioning strategies
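The tuning bullet above maps to a handful of standard Spark moves; here is a minimal sketch of them, in PySpark for brevity even though the role is Scala-first, with all paths, key names, and partition counts below being illustrative assumptions:

```python
# Minimal sketch of partitioning, caching, and checkpointing practices.
# Paths, the user_id key, and the partition count are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-tuning-sketch").getOrCreate()
spark.sparkContext.setCheckpointDir("s3a://example-bucket/checkpoints/")

events = spark.read.parquet("s3a://example-bucket/curated/events/")

# Repartition by the join/aggregation key so the shuffle happens once,
# then cache because several downstream aggregations reuse the result.
by_user = events.repartition(200, "user_id").cache()

# Checkpointing truncates long lineage, keeping retries and recovery cheap
# in iterative or multi-stage jobs.
by_user = by_user.checkpoint()
print(by_user.count())
```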
Posted 1 day ago