Job Description
Job Title: Data Engineer – Cloud-Agnostic Data, Analytics & AI Product Team
Location: Hyderabad
Employment Type: Full-time

Why this role matters
Our analytics and AI products are only as good as the data they run on. You will design and operate the pipelines and micro-services that transform multi-structured data into reliable, governed, and instantly consumable assets, regardless of which cloud the customer chooses.

Core Skills & Knowledge
- Programming: Python 3.10+, Pandas or Polars, SQL (ANSI, window functions, CTEs), basic Bash.
- Databases & Warehouses: PostgreSQL; Snowflake (stages, tasks, streams); Parquet/Delta Lake tables on S3/ADLS/GCS.
- APIs & Services: FastAPI, Pydantic models, OpenAPI specs, JWT/OAuth authentication.
- Orchestration & Scheduling: Apache Airflow, Dagster, or Prefect; familiarity with event-driven triggers via cloud queues (SQS, Pub/Sub).
- Cloud Foundations: Hands-on experience with at least one major cloud (AWS, Azure, GCP), willingness to write cloud-agnostic code, and a cost-aware approach to development.
- Testing & CI/CD: pytest; GitHub Actions / Azure Pipelines; Docker-first local development; semantic versioning.
- Data Governance: Basic understanding of GDPR/PII handling, role-based access, and encryption at rest and in flight.

Nice-to-Have / Stretch Skills
- Streaming ingestion with Kafka / Kinesis / Event Hub and PySpark Structured Streaming.
- Great Expectations, Soda, or Monte Carlo for data-quality monitoring.
- Graph or time-series stores (Neo4j, TimescaleDB).

Experience & Education
- 6–8 years of overall IT experience, including 4+ years building data pipelines or back-end services in production, ideally supporting analytics or ML use cases.
- Bachelor’s degree in Computer Science or Data Engineering, or demonstrably equivalent experience.