Software Engineer II A - GBS IND

7 - 9 years


Posted: 2 days ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

About Us

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day.

One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being.

Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization.

Working at Bank of America will give you a great career with opportunities to learn, grow, and make an impact, along with the power to make a difference. Join us!

Global Business Services

Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations.

Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation.

In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*

The Data Analytics Strategy platform and decision tool team is responsible for the data strategy of the entire CSWT and for developing the platforms that support it. The Data Science Platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative. As a Senior Hadoop Developer building Hadoop components in the SDP (Strategic Data Platform), the individual will be responsible for understanding designs, proposing high-level and detailed design solutions, and ensuring that coding practices and quality comply with software development standards. Working as an individual contributor on projects, the person should have strong analytical skills to make quick decisions under pressure and solid experience writing complex queries on large clusters. The role involves engaging with architecture teams on design solutions, proposing new technology adoption ideas, attending project meetings, partnering with nearshore and offshore teammates in an Agile environment, and coordinating with other application teams, development, testing, and upstream and downstream partners.

Responsibilities:

  • Develop high-performance and scalable analytics solutions on the Big Data platform to facilitate the collection, storage, and analysis of massive data sets from multiple channels.
  • Develop efficient utilities, data pipelines, and ingestion frameworks that can be reused across multiple business areas.
  • Utilize in-depth knowledge of the Hadoop stack and storage technologies, including HDFS, Spark, MapReduce, YARN, Hive, Sqoop, Impala, Hue, and Oozie, to design and optimize data processing workflows.
  • Perform data analysis, coding, and performance tuning; propose improvement ideas; and drive development activities at offshore.
  • Analyze, modify, and tune complex Hive queries.
  • Write and modify Python and shell scripts hands-on.
  • Provide guidance and mentorship to junior teammates.
  • Work with strategic partners to understand requirements and produce high-level and detailed designs that address real-time production issues.
  • Partner with nearshore and offshore teammates in an Agile environment, coordinating with other application teams, development, testing, and upstream/downstream partners.
  • Work on multiple projects concurrently; take ownership of and pride in your work, attend project meetings, understand requirements, design solutions, and develop code.
  • Identify gaps in technology and propose viable solutions.
  • Identify improvement areas within the application and work with the respective teams to implement them.
  • Ensure adherence to defined processes, quality standards, and best practices, delivering high quality in all deliverables.
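The ingestion and scripting duties above can be sketched in miniature. This is a hedged, illustrative Python example of a generic batch-ingestion utility that validates records and groups them into fixed-size batches before loading; all names (`validate_record`, `batch_records`, the field names) are hypothetical and not part of any Bank of America system.

```python
def validate_record(record):
    """Keep only records carrying the fields a downstream table would need.
    The required fields here are illustrative."""
    return isinstance(record, dict) and "id" in record and "payload" in record

def batch_records(records, batch_size):
    """Yield validated records in fixed-size batches, as a simple
    ingestion utility might before bulk-loading them."""
    batch = []
    for record in records:
        if not validate_record(record):
            continue  # skip malformed input rather than failing the whole load
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

raw = [
    {"id": 1, "payload": "a"},
    {"bad": True},  # malformed record, silently dropped
    {"id": 2, "payload": "b"},
    {"id": 3, "payload": "c"},
]
batches = list(batch_records(raw, batch_size=2))
```

Dropping malformed rows instead of aborting keeps a large load resilient; a production framework would additionally log or quarantine rejects.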

Desired Skills*

  • Data Lake Architecture: Understanding of the Medallion architecture.
  • Ingestion Frameworks: Knowledge of frameworks for ingesting structured, semi-structured, and unstructured data.
  • Data Warehouse: Familiarity with Apache Hive and Impala.
  • Perform Continuous Integration and Continuous Delivery (CI/CD) activities.
  • Hands-on experience working in the Cloudera Data Platform (CDP) to support data science workloads.
  • Contribute to story refinement and definition of requirements.
  • Participate in estimating the work necessary to realize a story/requirement through the delivery lifecycle.
  • Extensive hands-on experience supporting platforms that let modelers and analysts work through the complete model lifecycle (data munging, model development/training, governance, deployment).
  • Experience with model deployment, scoring, and monitoring for batch and real-time workloads on various technologies and platforms.
  • Experience with Hadoop cluster integration, including ETL, streaming, and API styles of integration.
  • Experience automating deployment using Ansible playbooks and scripting.
  • Experience developing and building RESTful API services in an efficient and scalable manner.
  • Design, build, and deploy streaming and batch data pipelines capable of processing and storing large datasets (TBs) quickly and reliably using Kafka, Spark, and YARN.
  • Experience with processing and deployment technologies such as YARN, Kubernetes/containers, and serverless compute for model development and training.
  • Effective communication, strong stakeholder-engagement skills, and a proven ability to lead and mentor a team of software engineers in a dynamic environment.
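The streaming-pipeline skill above can be illustrated without the actual Kafka/Spark stack. This is a minimal pure-Python sketch of the micro-batch aggregation pattern that such pipelines implement at scale; the event names and batch size are invented for the example, and a real job would read from Kafka topics and checkpoint its state.

```python
from collections import defaultdict

def micro_batch_counts(events, batch_size):
    """Consume an event stream in micro-batches and maintain running
    per-key counts, mimicking (in miniature) what a streaming job
    does with partitioned input."""
    counts = defaultdict(int)
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) == batch_size:
            for key in batch:  # apply the whole micro-batch at once
                counts[key] += 1
            batch.clear()
    for key in batch:  # flush the final partial batch
        counts[key] += 1
    return dict(counts)

stream = ["login", "click", "login", "purchase", "click", "login"]
result = micro_batch_counts(stream, batch_size=2)
```

Batching amortizes per-record overhead (network round-trips, commits) and is the same trade-off Spark Structured Streaming makes with its trigger interval.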

Requirements*

Education*

  • Graduation / Post Graduation

Experience Range*

  • 7 to 9 years

Foundational Skills

  • Hadoop, Hive, Sqoop, Impala, Unix/Linux scripts.

Desired Skills

  • Python, CI/CD, ETL.

Work Timings*

11:30 AM to 8:30 PM IST

Job Location*

Chennai
