About this role
Team Overview:
Data is at the core of the Aladdin platform, and increasingly, our ability to consume, store, analyze, and gain insight from data is a key component of our competitive advantage. The Data Engineering team is responsible for the data ecosystem within BlackRock. We engineer high-performance data pipelines, provide a fabric to discover and consume data, and continually evolve our data storage capabilities. We believe in writing small, testable code with a focus on innovation. We are committed to open source, and we regularly contribute our work back to the community.
We are seeking top-tier Cloud Native DevOps Platform Engineers to augment our Enterprise Data Platform team. Our objective is to extend our data lifecycle management practices to include structured, semi-structured, and unstructured data. This role requires a breadth of individual technical capabilities and competencies, though most important is a willingness and openness to learn new things across multiple technology disciplines. This role is for practitioners, not researchers.
As a Data Platform Cloud/DevOps Engineer in the Data Engineering team, you will:
- Work alongside our systems engineers and UI developers to help design and build scalable, automated CI/CD pipelines.
- Help prove out and productionize infrastructure and tooling to support scalable cloud-based applications.
- Unlock a myriad of generative AI/ML use cases for Aladdin Data, and thus for BlackRock.
- Have fun as part of an awesome team.
Specific Responsibilities:
- Work as part of a multi-disciplinary squad to establish our next generation of data pipelines and tools.
- Be involved from the inception of projects: understand requirements, design and develop solutions, and incorporate them into the designs of our platforms.
- Mentor team members on technology and best practices.
- Build and maintain strong relationships between DataOps Engineering and our Technology teams.
- Contribute to the open-source community and maintain excellent knowledge of the technical landscape for data and cloud tooling.
- Assist in troubleshooting issues and support the operation of production software.
- Write technical documentation.
Desirable Skills
Data Operations and Engineering
- Comfortable reading and writing Python code for data acquisition and ETL/ELT.
- Experience orchestrating data pipelines with Airflow and/or Argo Workflows.
- Experience implementing and operating telemetry-based monitoring, alerting, and incident response systems; we aim to follow Site Reliability Engineering (SRE) best practices.
- Experience supporting databases or datastores (e.g., MongoDB, Redis, Cassandra, Ignite, Hadoop, S3, Azure Blob Storage) and messaging/streaming platforms such as NATS or Kafka.
Cloud Native DevOps Platform Engineering
- Knowledge of the Kubernetes (K8s) APIs, with a strong focus on stateful workloads.
- Templating with Helm, Argo CD, Ansible, and Terraform.
- Understanding of the K8s Operator pattern, with the comfort and courage to wade into (predominantly Go-based) operator implementation codebases.
- Comfortable building atop K8s-native frameworks, including service mesh (Istio), secrets management (cert-manager, HashiCorp Vault), log management (Splunk), and observability (Prometheus, Grafana, Alertmanager).
- Experience creating and evolving CI/CD pipelines with GitLab or GitHub, following GitOps principles.
Natural/Large Language Models (Good to have)
- Experience with NLP coding tasks such as tokenization, chunking, tagging, embedding, and indexing to support subsequent retrieval and enrichment.
- Experience with basic prompt engineering, LLM fine-tuning, and chatbot implementations using modern Python SDKs such as LangChain and/or Transformers.
Overall, 5-8 years of hands-on experience in DevOps, Cloud, or related engineering practices.
Our benefits
Our hybrid work model