0.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: We are seeking a highly skilled and experienced Data Architect with expertise in designing and building data platforms in cloud environments. The ideal candidate will have a strong background in either AWS Data Engineering or Azure Data Engineering, along with proficiency in distributed data processing systems like Spark. Proficiency in SQL, data modeling, and building data warehouses, together with knowledge of ingestion tools and data governance, is essential for this role. The Data Architect will also need experience with orchestration tools such as Airflow or Dagster and proficiency in Python; knowledge of Pandas is beneficial.

Why Choose Ideas2IT: Ideas2IT has the good attributes of both a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. Single-product companies, however, tend to stagnate in the technologies they use; across our multiple product initiatives and customer-facing projects, you will get to work on a variety of technologies. AGI is going to change the world, and big companies like Microsoft are betting heavily on it (see here and here). We are following suit.

What's in it for you?
- Work on impactful products instead of back-office applications, for customers like Facebook, Siemens, Roche, and more
- Work on interesting projects like the Cloud AI platform for personalized cancer treatment
- Opportunity to continuously learn newer technologies
- Freedom to bring your ideas to the table and make a difference, instead of being a small cog in a big wheel
- Showcase your talent in Shark Tanks and Hackathons conducted in the company

Here's what you'll bring:
- Experience in designing and building data platforms in any cloud
- Strong expertise in either AWS Data Engineering or Azure Data Engineering
- Develop and optimize data processing pipelines using distributed systems like Spark
- Create and maintain data models to support efficient storage and retrieval
- Build and optimize data warehouses for analytical and reporting purposes, using technologies such as Postgres, Redshift, Snowflake, etc.
- Knowledge of ingestion tools such as Apache Kafka, Apache NiFi, AWS Glue, or Azure Data Factory
- Establish and enforce data governance policies and procedures to ensure data quality and security
- Utilize orchestration tools like Airflow or Dagster to schedule and manage data workflows
- Develop scripts and applications in Python to automate tasks and processes
- Collaborate with stakeholders to gather requirements and translate them into technical specifications
- Communicate technical solutions effectively to clients and stakeholders
- Familiarity with multiple cloud ecosystems such as AWS, Azure, and Google Cloud Platform (GCP)
- Experience with containerization and orchestration technologies like Docker and Kubernetes
- Knowledge of machine learning and data science concepts
- Experience with data visualization tools such as Tableau or Power BI
- Understanding of DevOps principles and practices
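For illustration, a minimal PySpark sketch of the kind of distributed aggregation pipeline this role describes; the source path, columns, and output location are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: aggregate raw order events into a daily reporting table.
spark = SparkSession.builder.appName("daily_order_metrics").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path

daily_metrics = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write partitioned output that a warehouse (e.g., Redshift or Snowflake) can ingest.
daily_metrics.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_order_metrics/"
)
```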
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior Python Engineer at our company, you will leverage your deep expertise in data engineering and API development to drive technical excellence and autonomy. Your primary responsibility will be leading the development of scalable backend systems and data infrastructure that power AI-driven applications across our platform.

You will design, develop, and maintain high-performance APIs and microservices using Python frameworks such as FastAPI and Flask. You will also build and optimize scalable data pipelines, ETL/ELT processes, and orchestration frameworks, using AI development tools like GitHub Copilot, Cursor, or CodeWhisperer to enhance engineering velocity and code quality. In this role, you will architect resilient and modular backend systems integrated with databases like PostgreSQL, MongoDB, and Elasticsearch. You will manage workflows and event-driven architectures using tools such as Airflow, Dagster, or Temporal.io, and collaborate with cross-functional teams to deliver production-grade systems in cloud environments (AWS/GCP/Azure) with high test coverage, observability, and reliability.

To be successful in this position, you must have at least 5 years of hands-on experience in Python backend/API development, a strong background in data engineering, and proficiency in AI-enhanced development environments like Copilot, Cursor, or equivalent tools. Solid experience with Elasticsearch, PostgreSQL, and scalable data solutions, along with familiarity with Docker, CI/CD, and cloud-native deployment practices, is crucial. You should also demonstrate the ability to take ownership of features from idea to production.

Nice-to-have qualifications include experience with distributed workflow engines like Temporal.io, a background in AI/ML systems (PyTorch or TensorFlow), familiarity with LangChain, LLMs, and vector search tools (e.g., FAISS, Pinecone), and exposure to weak supervision, semantic search, or agentic AI workflows. Join us to build infrastructure for cutting-edge AI products and work in a collaborative, high-caliber engineering environment.
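As a rough illustration of the backend API work described here, a minimal FastAPI endpoint sketch; the model, route, and in-memory store are hypothetical stand-ins for a PostgreSQL or Elasticsearch lookup:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-data-api")  # hypothetical service name

class Document(BaseModel):
    id: str
    title: str
    score: float

# In-memory stand-in for a real database query.
_FAKE_STORE = {"42": Document(id="42", title="example", score=0.97)}

@app.get("/documents/{doc_id}", response_model=Document)
def get_document(doc_id: str) -> Document:
    """Return a single document by id, or 404 if it does not exist."""
    doc = _FAKE_STORE.get(doc_id)
    if doc is None:
        raise HTTPException(status_code=404, detail="document not found")
    return doc
```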
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
karnataka
On-site
You should have a Bachelor's degree in Computer Science, Information Technology, or a related field. A strong understanding of operating systems, networking basics, and Linux command-line usage is essential. Proficiency in at least one scripting language such as Python or Bash is required. Basic knowledge of cloud computing concepts, with a preference for AWS, is expected. Familiarity with DevOps principles like CI/CD, automation, and cloud infrastructure management is a plus, and awareness of version control systems like Git is necessary.

It would be beneficial to have exposure to cloud platforms, preferably AWS, and infrastructure services such as EC2, S3, RDS, and Kubernetes. Understanding of Infrastructure as Code concepts and knowledge of Terraform would be advantageous. Basic knowledge of CI/CD tools like GitLab or Azure DevOps is a plus, as is awareness of monitoring concepts and observability tools like New Relic and Grafana. Basic knowledge of containerization, automation, or data/ML infra tools such as Docker, Ray, Dagster, and Weights & Biases is an advantage, and exposure to scripting automation and ops workflows in Python is desired.

Joining Sanas will allow you to gain real-world experience in managing cloud infrastructure, including AWS, Azure, and a COLO datacenter. You will work on infrastructure automation using Terraform and Python, CI/CD pipeline development and management with GitLab and Spinnaker, and observability and monitoring with tools like New Relic, Grafana, and custom alerting mechanisms. You will also have the opportunity to work with cutting-edge tools in ML/AI infrastructure like Ray, Dagster, and W&B, and data analytics tools such as ClickHouse and Aurora PostgreSQL. Additionally, you will learn about agile delivery models and collaborate with Engineering, Science, InfoSec, and ML teams.

We offer hands-on experience with modern DevOps practices and enterprise cloud architecture, mentorship from experienced DevOps engineers, exposure to scalable infrastructure supporting production-grade AI and ML workloads, and an opportunity to contribute to the automation, reliability, and security of our systems. You will participate in occasional on-call rotations to maintain system availability in a collaborative, fast-paced learning environment where your work directly supports engineering and innovation.
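For illustration, a small Python automation sketch in the spirit of the scripting and ops workflows mentioned above, using boto3 to report running EC2 instances missing an "Owner" tag; the region and tag policy are hypothetical:

```python
import boto3

def find_untagged_instances(region: str = "us-east-1") -> list[str]:
    """Return IDs of running EC2 instances without an 'Owner' tag (hypothetical policy)."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    untagged = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if "Owner" not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged

if __name__ == "__main__":
    # Prints a list of instance IDs that violate the hypothetical tagging policy.
    print(find_untagged_instances())
```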
Posted 4 days ago
2.0 - 4.0 years
7 - 11 Lacs
Jaipur
Work from Office
Position Overview
We are seeking a skilled Data Engineer with 2-4 years of experience to design, build, and maintain scalable data pipelines and infrastructure. You will work with modern data technologies to enable data-driven decision making across the organisation.

Key Responsibilities
- Design and implement ETL/ELT pipelines using Apache Spark and orchestration tools (Airflow/Dagster).
- Build and optimize data models on Snowflake and cloud platforms.
- Collaborate with analytics teams to deliver reliable data for reporting and ML initiatives.
- Monitor pipeline performance, troubleshoot data quality issues, and implement testing frameworks.
- Contribute to data architecture decisions and work with cross-functional teams to deliver quality data solutions.

Required Skills & Experience
- 2-4 years of experience in data engineering or a related field
- Strong proficiency with Snowflake, including data modeling, performance optimisation, and cost management
- Hands-on experience building data pipelines with Apache Spark (PySpark)
- Experience with workflow orchestration tools (Airflow, Dagster, or similar)
- Proficiency with dbt for data transformation, modeling, and testing
- Proficiency in Python and SQL for data processing and analysis
- Experience with cloud platforms (AWS, Azure, or GCP) and their data services
- Understanding of data warehouse concepts, dimensional modeling, and data lake architectures

Preferred Qualifications
- Experience with infrastructure as code tools (Terraform, CloudFormation)
- Knowledge of streaming technologies (Kafka, Kinesis, Pub/Sub)
- Familiarity with containerisation (Docker, Kubernetes)
- Experience with data quality frameworks and monitoring tools
- Understanding of CI/CD practices for data pipelines
- Knowledge of data catalog and governance tools
- Advanced dbt features including macros, packages, and documentation
- Experience with table format technologies (Apache Iceberg, Apache Hudi)

Technical Environment
- Data Warehouse: Snowflake
- Processing: Apache Spark, Python, SQL
- Orchestration: Airflow/Dagster
- Transformation: dbt
- Cloud: AWS/Azure/GCP
- Version Control: Git
- Monitoring: DataDog, Grafana, or similar
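As an illustrative sketch of the orchestration side of this role, a minimal Airflow 2.x DAG using the TaskFlow API; the DAG id, schedule, and task bodies are hypothetical placeholders for real extract/transform/load logic:

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_daily_etl():
    """Hypothetical daily ETL: extract raw records, transform, then load."""

    @task
    def extract() -> list[dict]:
        # Placeholder for pulling from an API, database, or object store.
        return [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 20.0}]

    @task
    def transform(rows: list[dict]) -> float:
        # Placeholder transformation: total amount for the day.
        return sum(row["amount"] for row in rows)

    @task
    def load(total: float) -> None:
        # Placeholder for writing to Snowflake or another warehouse.
        print(f"daily total = {total}")

    load(transform(extract()))

example_daily_etl()
```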
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
Your role: As a passionate technical development lead specializing in the GRC space, you will play a crucial role in delivering efficient configuration and customization of large GRC installations. Your responsibilities will include providing technical leadership, working on strategic programs, and collaborating with a high-performance team. You should possess strong analytical and technical ability, with a minimum of 5 years of experience in Cloud, DevOps, UI frameworks, Java, REST, and DB skills. The ideal candidate will be able to work independently, communicate effectively, and have a proven track record of working on large, complex development projects. You will leverage your skills and knowledge to develop high-quality solutions that meet business needs while driving continuous integration and improvements. Additionally, you will collaborate with the Senior Tech Lead & Solution Architect to provide valuable inputs on application design.

Your team: You will be part of the Compliance & Operational Risk IT team, a global team responsible for designing and implementing innovative IT solutions to track complex regulatory requirements in the financial services industry. The team is spread across various locations including the US, UK, Switzerland, and India, providing support to internal clients worldwide.

Your expertise:
- CI/CD pipeline creation and deployment into production (incl. GitOps practices), enabling canary/rolling deployments, blue/green deployments, and feature flags
- Observability of deployments, chaos engineering
- Relational DBs (SQL / PostgreSQL)
- Data flow and ETL (e.g., Airflow/Dagster or similar)
- JVM-based languages (Java 8/11+, Scala, Kotlin)
- Knowledge of M7 preferred (eve

About Company: Purview is a leading Digital Cloud & Data Engineering company with headquarters in Edinburgh, United Kingdom and a presence in 14 countries including India, Poland, Germany, Finland, Netherlands, Ireland, USA, UAE, Oman, Singapore, Hong Kong, Malaysia, and Australia. The company has a strong presence in the UK, Europe, and APAC regions, providing services to captive clients and top IT tier-1 organizations.

Company Info:
India Office: 3rd Floor, Sonthalia Mind Space, Near Westin Hotel, Gafoor Nagar, Hitechcity, Hyderabad. Phone: +91 40 48549120 / +91 8790177967
UK Office: Gyleview House, 3 Redheughs Rigg, South Gyle, Edinburgh, EH12 9DQ. Phone: +44 7590230910
Email: careers@purviewservices.com
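As a minimal illustration of the relational-DB and ETL skills listed above, a short Python sketch loading extracted rows into PostgreSQL with psycopg2; the connection details, table, and data are hypothetical:

```python
import psycopg2
from psycopg2.extras import execute_values

# Hypothetical rows extracted from an upstream system.
rows = [("2024-01-01", "policy_check", 12), ("2024-01-01", "access_review", 7)]

# Connection parameters are placeholders.
conn = psycopg2.connect(host="localhost", dbname="grc", user="etl", password="secret")
try:
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            CREATE TABLE IF NOT EXISTS control_events (
                event_date date,
                control_name text,
                event_count integer
            )
            """
        )
        # Bulk-insert the extracted rows in a single round trip.
        execute_values(
            cur,
            "INSERT INTO control_events (event_date, control_name, event_count) VALUES %s",
            rows,
        )
finally:
    conn.close()
```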
Posted 2 weeks ago
6.0 - 10.0 years
12 - 20 Lacs
Pune, Delhi / NCR, Mumbai (All Areas)
Hybrid
Role & responsibilities (6+ years of experience required)

Job Description: Enterprise Business Technology is on a mission to support and create enterprise software for our organization. We're a highly collaborative team that interlocks with corporate functions such as Finance and Product teams to deliver value with innovative technology solutions. Each day, thousands of people rely on Enlyte's technology and services to help their customers during challenging life events. We're looking for a remote Senior Data Analytics Engineer for our Corporate Analytics team.

Opportunity: Technical lead for our corporate analytics practice using dbt, Dagster, Snowflake, Power BI, SQL, and Python.

Responsibilities
- Build our data pipelines for our data warehouse in Python, working with APIs to source data
- Build Power BI reports and dashboards associated with this process
- Contribute to our strategy for new data pipelines and data engineering approaches
- Maintain a medallion-based architecture for data analysis with Kimball modeling
- Participate in daily scrum calls and follow an agile SDLC
- Create meaningful documentation of your work
- Follow organizational best practices for dbt and write maintainable code

Qualifications
- 5+ years of professional experience as a Data Engineer
- Strong dbt experience (3+ years) and knowledge of the modern data stack
- Strong experience with Snowflake (3+ years)
- Experience using Dagster and running complex pipelines (1+ year)
- Some Python experience; experience with Git and Azure DevOps
- Experience with data modeling in Kimball and medallion-based structures
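For illustration, a minimal Python sketch of an API-to-warehouse ingestion step like the one described above, using the Snowflake Python connector; the endpoint, credentials, and table are hypothetical, and downstream dbt models would handle cleaning and modeling:

```python
import requests
import snowflake.connector

# Hypothetical source API.
resp = requests.get("https://api.example.com/v1/invoices", timeout=30)
resp.raise_for_status()
records = resp.json()  # assume a list of {"id": ..., "amount": ...} dicts

# Connection parameters are placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="secret",
    warehouse="ANALYTICS_WH", database="RAW", schema="FINANCE",
)
try:
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS invoices_raw (id STRING, amount FLOAT)")
    # Land raw records as-is; transformation belongs in dbt models further downstream.
    cur.executemany(
        "INSERT INTO invoices_raw (id, amount) VALUES (%(id)s, %(amount)s)",
        [{"id": r["id"], "amount": r["amount"]} for r in records],
    )
finally:
    conn.close()
```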
Posted 4 weeks ago
13.0 - 20.0 years
40 - 45 Lacs
Bengaluru
Work from Office
Principal Architect - Platform & Application Architect
Experience: 15+ years in software/data platform architecture, including 5+ years in architectural leadership roles; architecture & data platform expertise
Education: Bachelor's/Master's in CS, Engineering, or a related field
Title: Principal Architect
Location: Onsite, Bangalore
Experience: 15+ years in software & data platform architecture and technology strategy

Role Overview
We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle (ingestion, processing, storage, and analytics) while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.

Key Responsibilities
1. Architecture & Strategy
- Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components).
- Define and document data blueprints, data domain models, and architectural standards.
- Lead build-vs-buy evaluations for platform components and recommend best-fit tools and technologies.
2. Data Ingestion & Processing
- Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte.
- Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster).
- Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data.
3. Storage & Modeling
- Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi).
- Develop efficient data strategies for both OLAP and OLTP workloads.
- Guide schema evolution, data versioning, and performance tuning.
4. Governance, Security, and Compliance
- Establish data governance, cataloging, and lineage tracking frameworks.
- Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc.
- Promote standardization and best practices across business units.
5. Platform Engineering & DevOps
- Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines.
- Ensure observability, reliability, and cost efficiency of the platform.
- Define SLAs, capacity planning, and disaster recovery plans.
6. Collaboration & Mentorship
- Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals.
- Mentor teams on architecture principles, technology choices, and operational excellence.

Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 12+ years of experience in software engineering, including 5+ years in architectural leadership roles.
- Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js.
- Strong hands-on experience building scalable data platforms in on-premise, hybrid, or cloud environments.
- Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg.
- Familiarity with data mesh, data fabric, and lakehouse paradigms.
- Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles.
- Demonstrated success in leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms.
- Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels.
- Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable.
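For illustration, a minimal sketch of the real-time ingestion path mentioned in the responsibilities, using the kafka-python client; the topic, brokers, and landing path are hypothetical, and a production pipeline would write to a lakehouse table (Delta/Iceberg) rather than a local file:

```python
import json

from kafka import KafkaConsumer

# Hypothetical sensor-event topic on an on-prem Kafka cluster.
consumer = KafkaConsumer(
    "sensor-events",
    bootstrap_servers=["broker1:9092", "broker2:9092"],
    group_id="platform-ingest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Micro-batch events and append them to a JSONL landing file.
batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 1000:
        with open("/data/landing/sensor_events.jsonl", "a") as sink:
            for event in batch:
                sink.write(json.dumps(event) + "\n")
        batch.clear()
```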
Posted 1 month ago
5.0 - 7.0 years
13 - 15 Lacs
Pune
Work from Office
About us: We are building a modern, scalable, fully automated on-premise data platform, designed to handle complex data workflows including data ingestion, ETL processes, physics-based calculations, and machine learning predictions. Orchestrated using Dagster, our platform integrates with multiple data sources, edge devices, and storage systems. A core principle of our architecture is self-service: granting data scientists, analysts, and engineers granular control over the entire journey of their data assets, as well as empowering teams to modify and extend their data pipelines with minimal friction. We're looking for a hands-on Data Engineer to help develop, maintain, and optimize this platform.

Role & responsibilities:
- Design, develop, and maintain robust data pipelines using Dagster for orchestration
- Build and manage ETL pipelines with Python and SQL
- Optimize performance and reliability of the platform within on-premise infrastructure constraints
- Develop solutions for processing and aggregating data on edge devices, including data filtering, compression, and secure transmission
- Maintain metadata and data lineage; ensure data quality, consistency, and compliance with governance and security policies
- Implement CI/CD workflows for the platform on a local Kubernetes cluster
- Architect the platform with a self-service mindset, including clear abstractions, reusable components, and documentation
- Develop in collaboration with data scientists, analysts, and frontend developers to understand evolving data needs
- Define and maintain clear contracts/interfaces with source systems, ensuring resilience to upstream changes

Preferred candidate profile:
- 5-7 years of experience in database-driven projects or related fields
- 1-2 years of experience with data platforms, orchestration, and big data management
- Proven experience as a Data Engineer or similar role, with a focus on backend data processing and infrastructure
- Hands-on experience with Dagster or similar data orchestration tools (e.g., Airflow, Prefect, Luigi, Databricks)
- Proficiency with SQL and Python
- Strong understanding of data modeling, ETL/ELT best practices, and batch/stream processing
- Familiarity with on-premises deployments and their challenges (e.g., network latency, storage constraints, resource management)
- Experience with version control (Git) and CI/CD practices for data workflows
- Understanding of data governance, access control, and data cataloging
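For illustration, a minimal Dagster sketch of an asset-based pipeline like the one this platform is built around; the asset names, thresholds, and data are hypothetical:

```python
from dagster import Definitions, asset

@asset
def raw_sensor_readings() -> list[dict]:
    """Hypothetical ingestion step: pull readings from an edge device or API."""
    return [{"device": "pump-1", "temp_c": 71.4}, {"device": "pump-2", "temp_c": 66.0}]

@asset
def overheating_devices(raw_sensor_readings: list[dict]) -> list[str]:
    """Downstream asset: flag devices above a (hypothetical) temperature threshold."""
    return [r["device"] for r in raw_sensor_readings if r["temp_c"] > 70.0]

# Register the assets so Dagster can materialize them (e.g., via `dagster dev`).
defs = Definitions(assets=[raw_sensor_readings, overheating_devices])
```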
Posted 2 months ago