Gurgaon, Haryana, India
Not disclosed
On-site
Full Time
Requirements:
• 4-8 years of experience with web frameworks, preferably Python Flask, Django, FastAPI, or similar.
• Good knowledge of cloud-based CI/CD service offerings from cloud service providers such as AWS, GCP, and Azure.
• Experience with other services in any of the three major cloud service providers, i.e. AWS, GCP, or Azure.
• Strong grasp of SQL fundamentals: reading and writing SQL queries, basic familiarity with database interaction tools (such as pgAdmin), columnar databases, vector databases, and database optimization techniques including indexing. Knowledge of AWS Aurora is good to have.
• Implementation experience with Generative AI, including RAG and agentic solutions built on cloud-provided or self-hosted LLMs such as GPT-4o, Claude (via AWS Bedrock), and Gemini, is a desired skill.
• Good knowledge of API development and testing, including but not limited to HTTP, RESTful services, Postman, and allied cloud services such as API Gateway (a minimal illustrative sketch follows this listing).
• Strong coding, debugging, and problem-solving abilities and good knowledge of Python; experience with pip, setuptools, etc. is good to have.
• A technical background in data with a deep understanding of issues in multiple areas, such as data acquisition, ingestion and processing, data management, distributed processing, and high availability, is required.
• Quality delivery is the highest priority; should know industry best practices and standards for building and delivering performant, scalable APIs.
• Demonstrated expertise in team management and as an individual contributor.
• Education: B.E./B.Tech, MCA, or M.E./M.Tech.
This job was posted by Nikitaseles Pinto from HashedIn by Deloitte.
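As a hedged illustration of the API development and testing skills listed above, here is a minimal FastAPI sketch with one POST and one GET route plus an in-process check. The service name, Order model, routes, and in-memory store are assumptions made for the example, not anything specified in the posting.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-api")  # hypothetical service name

class Order(BaseModel):
    order_id: int
    item: str
    quantity: int

# In-memory store standing in for a real database (e.g. Postgres / AWS Aurora).
ORDERS: dict[int, Order] = {}

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

if __name__ == "__main__":
    # Quick in-process check of both endpoints, the kind of request/response
    # test Postman or pytest would otherwise cover.
    from fastapi.testclient import TestClient

    client = TestClient(app)
    assert client.post("/orders", json={"order_id": 1, "item": "disk", "quantity": 2}).status_code == 201
    assert client.get("/orders/1").json()["item"] == "disk"
```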
Gurugram, Haryana, India
Not disclosed
On-site
Full Time
Position: Technical Lead
Location: Bangalore/Pune/Hyderabad/Gurugram/Kolkata/Chennai/Mumbai
Experience: 8+ Years

ABOUT HASHEDIN
We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.

WHY SHOULD YOU JOIN US?
With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic, fun culture of inclusion, collaboration, and high performance, HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. So, what impact will you make? Visit us at https://hashedin.com

JOB TITLE: Data Integration Tech Lead (Oracle ODI)
We are seeking an energetic and technically proficient Data Integration Tech Lead to design, build, and optimize robust data integration and analytics solutions using the Oracle technology stack. This role puts you at the core of our enterprise data modernization efforts, responsible for designing, implementing, and maintaining end-to-end data integration pipelines across traditional and cloud platforms. You will leverage your expertise in Oracle Data Integrator (ODI), Oracle Integration Cloud (OIC), and related technologies to drive efficient data movement, transformation, and loading while maintaining the highest standards of data quality, lineage, and governance. You will work hands-on and lead a small team of developers, shaping best practices for data integration workflows and collaborating with Analytics/BI teams to deliver fit-for-purpose solutions.

Mandatory Skills:
Experience:
• 6–8 years of progressive experience in enterprise data integration, with at least 4 years of hands-on experience in Oracle Data Integrator (ODI).
• Strong understanding of and working experience with Oracle Integration Cloud (OIC), Oracle databases, and related cloud infrastructure.
• Proven track record in designing and implementing large-scale ETL/ELT solutions across hybrid (on-prem/cloud) architectures.
Technical Proficiency:
• Deep hands-on expertise with ODI components (Topology, Designer, Operator, Agent) and OIC (integration patterns, adapters, process automation).
• Strong command of SQL and PL/SQL for data manipulation and transformation.
• Experience with REST/SOAP APIs, batch scheduling, and scripting (Python, Shell, or similar) for process automation (a hedged automation sketch follows this listing).
• Data modeling proficiency (logical/physical, dimensional, OLAP/OLTP).
• Familiarity with Oracle Analytics Cloud (OAC), OBIEE, and integration into analytics platforms.
• Solid understanding of data quality frameworks, metadata management, and lineage documentation.
• Setting up Topology, building objects in Designer, monitoring via Operator, different types of KMs, Agents, etc.
• Packaging components and database operations such as aggregate, pivot, union, etc.
• Using ODI mappings, error handling, automation with ODI, and migration of objects.
• Design and develop complex mappings, process flows, and ETL scripts.
• Expertise in developing load plans and scheduling jobs.
• Ability to design data quality and reconciliation frameworks using ODI.
• Integrate ODI with multiple source/target systems.
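As a hedged illustration of the "scripting for process automation" item above, the sketch below shows a small Python job that triggers a data-load integration over REST and polls until it completes. The base URL, endpoint paths, environment variable names, and job name are placeholders for illustration only; they are not the actual ODI/OIC API.

```python
import os
import time

import requests

# Placeholder connection details; a real setup would point at the agent's or
# integration platform's actual trigger endpoint and credentials.
BASE_URL = os.environ.get("INTEGRATION_BASE_URL", "https://integration.example.invalid/api")
AUTH = (os.environ.get("INTEGRATION_USER", "svc_user"),
        os.environ.get("INTEGRATION_PASSWORD", "change-me"))

def trigger_load(job_name: str) -> str:
    """Kick off a load job and return its run id (endpoint is illustrative)."""
    resp = requests.post(f"{BASE_URL}/jobs/{job_name}/runs", auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["run_id"]

def wait_for(run_id: str, poll_seconds: int = 30) -> None:
    """Poll the run status until it finishes, failing loudly on error."""
    while True:
        resp = requests.get(f"{BASE_URL}/runs/{run_id}", auth=AUTH, timeout=30)
        resp.raise_for_status()
        status = resp.json()["status"]
        if status == "SUCCEEDED":
            return
        if status in ("FAILED", "CANCELLED"):
            raise RuntimeError(f"run {run_id} ended with status {status}")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    wait_for(trigger_load("daily_sales_load"))  # hypothetical job name
```

In practice this kind of wrapper is what a scheduler (cron, Control-M, Airflow, etc.) would invoke so that failures surface as non-zero exits rather than silent hangs.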
Gurugram, Haryana, India
Not disclosed
On-site
Full Time
POSITION: Software Engineer – Data Engineering
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 5-9 Years

ABOUT HASHEDIN
We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.

JOB TITLE: Software Engineer – Data Engineering

OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
• Hands-on software coding or scripting for a minimum of 4 years
• Experience in product management for at least 4 years
• Stakeholder management experience for at least 4 years
• Experience with at least one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities:
• Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi); a minimal Airflow sketch appears after this posting.
• Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
• Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
• Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code).
• Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
• Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.

General Skills & Experience:
• Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
• Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
• Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
• Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
• Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
• Strong SQL development skills for ETL, analytics, and performance optimization.
• Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
• Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
• Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
• Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
• Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
• Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
• Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

EDUCATIONAL QUALIFICATIONS:
• Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
• Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
• Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
• Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
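As a hedged illustration of the Airflow DAG work described in this posting, the sketch below shows a daily extract -> transform -> load chain using the TaskFlow API (assuming Airflow 2.4 or later). The DAG id, schedule, and task bodies are illustrative assumptions, not the team's actual pipeline.

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(
    dag_id="orders_daily_etl",      # hypothetical pipeline name
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["example"],
)
def orders_daily_etl():
    @task
    def extract() -> list[dict]:
        # A real task would pull from a source system or object store.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 75.5}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Simple validation/cleansing step standing in for Spark or SQL transforms.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # A real task would write to a warehouse/lakehouse (Snowflake, BigQuery, ...).
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))

orders_daily_etl()
```

Small intermediate payloads like these pass between tasks via XCom; larger datasets would normally be staged in object storage and only references passed between tasks.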
Bengaluru, Karnataka, India
Not disclosed
On-site
Full Time
General Skills & Experience:
Minimum 10-18 years of experience.
• Expertise in Spark (Scala/Python), Kafka, and cloud-native big data services (GCP, AWS, Azure) for ETL, batch, and stream processing (a minimal PySpark sketch follows this listing).
• Deep knowledge of cloud platforms (AWS, Azure, GCP), including certification (preferred).
• Experience designing and managing advanced data warehousing and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse).
• Proven experience building, managing, and optimizing ETL/ELT pipelines and data workflows for large-scale systems.
• Strong experience with data lakes, storage formats (Parquet, ORC, Delta, Iceberg), and data movement strategies (cloud and hybrid).
• Advanced knowledge of data modeling, SQL development, data partitioning, optimization, and database administration.
• Solid understanding of and experience with Master Data Management (MDM) solutions and reference data frameworks.
• Proficient in implementing Data Lineage, Data Cataloging, and Data Governance solutions (e.g., AWS Glue Data Catalog, Azure Purview).
• Familiar with data privacy, data security, compliance regulations (GDPR, CCPA, HIPAA, etc.), and best practices for enterprise data protection.
• Experience with data integration tools and technologies (e.g., AWS Glue, GCP Dataflow, Apache NiFi/Airflow, etc.).
• Expertise in batch and real-time data processing architectures; familiarity with event-driven, microservices, and message-driven patterns.
• Hands-on experience with data analytics, BI, and visualization tools (Power BI, Tableau, Looker, Qlik, etc.) and supporting complex reporting use cases.
• Demonstrated capability with data modernization projects: migrations from legacy/on-prem systems to cloud-native architectures.
• Experience with data quality frameworks, monitoring, and observability (data validation, metrics, lineage, health checks).
• Background working with structured, semi-structured, unstructured, temporal, and time-series data at large scale.
• Familiarity with Data Science and ML pipeline integration (DevOps/MLOps, model monitoring, and deployment practices).
• Experience defining and managing enterprise metadata strategies.
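As a hedged illustration of the Spark ETL and partitioning skills listed above, here is a minimal PySpark batch job: read raw CSV, aggregate, and write partitioned Parquet. The bucket paths, column names, and partition key are assumptions for the example, not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

# Placeholder source path; a real job would read from the landing zone of a data lake.
orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/raw/orders/")
)

# Aggregate order amounts into daily revenue per region.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Partitioning by date lets downstream readers prune files instead of scanning everything.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_revenue/")  # placeholder target path
)

spark.stop()
```

The same structure carries over to Delta or Iceberg targets by swapping the writer format, which is typically how lakehouse tables mentioned above get populated.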