6.0 years
0 Lacs
Kochi, Kerala, India
On-site
We are seeking an experienced Data Engineer to design and implement scalable data solutions. The ideal candidate will have deep expertise in cloud data warehousing, ETL/ELT processes, data modeling, and business intelligence. Requirements: 6+ years of experience in data engineering. Deep expertise in AWS Redshift, including data modeling, query optimization, and cluster management. Solid understanding of, and experience implementing, data quality, lineage, and data governance. Strong experience in Apache Airflow for workflow orchestration and scheduling. Proficiency in dbt for data transformation and modeling. Candidates with strong experience in the Azure data stack can also be considered. Experience creating dashboards and reports in Tableau. Excellent SQL skills and experience with Python, Java, or Scala. Knowledge of data warehousing concepts and dimensional modeling. Strong communication skills and ability to work cross-functionally. Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
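As a hedged illustration of the orchestration stack this role names, the sketch below shows Apache Airflow scheduling a dbt build against Redshift. The DAG id, project path, and target name are invented for the example, and dbt is assumed to read its Redshift connection from profiles.yml.

```python
# Minimal Airflow DAG sketch: schedule a dbt build against Redshift.
# The dag_id, dbt project path, and --target name are hypothetical;
# dbt reads its Redshift connection details from profiles.yml.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="redshift_dbt_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )
    # Only test models after they have been rebuilt
    dbt_run >> dbt_test
```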
Posted 2 weeks ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are seeking an experienced Data Engineer to design and implement scalable data solutions. The ideal candidate will have deep expertise in cloud data warehousing, ETL/ELT processes, data modeling, and business intelligence. Requirements: 6+ years of experience in data engineering. Deep expertise in AWS Redshift, including data modeling, query optimization, and cluster management. Solid understanding of, and experience implementing, data quality, lineage, and data governance. Strong experience in Apache Airflow for workflow orchestration and scheduling. Proficiency in dbt for data transformation and modeling. Candidates with strong experience in the Azure data stack can also be considered. Experience creating dashboards and reports in Tableau. Excellent SQL skills and experience with Python, Java, or Scala. Knowledge of data warehousing concepts and dimensional modeling. Strong communication skills and ability to work cross-functionally. Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Posted 2 weeks ago
2.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job Opening Notification for: Data Engineer (Azure/AWS) Experience: 2 to 3 years Qualification: BE in Computer Science or a related field Gender: Male / Female Skills Required: *Experience with Azure Data Factory or AWS Glue, S3, Redshift *Strong in SQL, Python, and Apache Spark *Designing & maintaining ETL/ELT data pipelines *Hands-on with cloud data architecture & warehousing *Comfortable working with large-scale datasets Salary: 40,000 - 50,000 per month Job Location: Nehru Nagar, Nagpur Industry: Industrial IoT Consulting & Development Services
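For a sense of what the pipeline work above looks like in practice, here is a minimal PySpark ETL sketch in the spirit of the skills listed: read raw CSV from S3, clean and aggregate it, and write curated Parquet back. Bucket names, paths, and columns are hypothetical.

```python
# Minimal PySpark ETL sketch: extract raw CSV from S3, transform, load Parquet.
# Buckets, paths, and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV landed in S3
orders = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Transform: cast types, drop bad rows, derive a daily revenue aggregate
daily_revenue = (
    orders.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned Parquet for the warehouse/reporting layer
(daily_revenue.write.mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://curated-bucket/daily_revenue/"))
```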
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Title: Data Analyst - Marketing Location: Mumbai (Lower Parel) Experience: 1+ years About the Company: Knya is India’s leading online medical apparel brand. The brand was founded in 2021 by husband-wife duo Vanshika and Abhijeet Kaji. The vision of Knya is to fill the gap of functional and fashionable medical apparel for the medical industry across the country at multiple levels. Knya has raised funding led by DSG Consumer Partners, with participation from the promoter families of Narayana Health and Cipla Health, amongst others. Since inception, Knya products have been delivered to 900+ cities and 200+ medical institutes across India. Position Overview: As a Direct-to-Consumer (D2C) Analytics Associate, you will play a crucial role in analyzing and interpreting data related to the company's D2C marketing operations. Your primary responsibility will be to leverage data analytics to provide insights into customer behavior, sales performance, and overall business trends. You will work closely with cross-functional teams to enhance the effectiveness of D2C strategies and contribute to data-driven decision-making. Key Responsibilities: Analyze D2C data sets to identify trends, patterns, and insights related to customer behavior, product performance, and sales metrics. Generate regular and ad-hoc reports to communicate key findings to relevant stakeholders. Collaborate with marketing teams to optimize customer segmentation strategies. Monitor and evaluate the performance of D2C channels, ad campaigns, and promotions. Identify areas for improvement and provide recommendations for optimizing marketing and sales efforts. Develop and maintain forecasting models to predict future D2C sales, customer acquisition, and other relevant metrics. Use predictive modeling techniques to anticipate customer behavior and market trends. Provide data-driven insights to support strategic decision-making processes. Ensure the accuracy and reliability of D2C data by implementing data quality assurance processes. Work with relevant teams to address and resolve data discrepancies. Stay updated on industry best practices, tools, and technologies related to D2C analytics. Must-haves: Web development experience, to understand and work with data layer implementations. Knowledge of Python and SQL for data manipulation and analysis. Proficiency in Looker Studio for creating dynamic, user-friendly dashboards. Hands-on experience with AWS (e.g., Redshift, Lambda). Strong SQL skills and experience working with large-scale datasets. Good-to-Have Qualifications: Experience working as a Data Analyst in a D2C marketing or e-commerce environment. Familiarity with GCP. Experience with Google Tag Manager (GTM). Understanding of attribution modeling and multi-touch attribution. Qualifications: Bachelor's degree in a relevant field (e.g., Statistics, Mathematics, Business Analytics). Proficiency in data visualization tools (e.g., Looker Studio) and statistical analysis tools (e.g., Python, SQL). Strong understanding of D2C business models and key performance indicators. Excellent communication and presentation skills. Ability to work collaboratively in a cross-functional team environment. If you are passionate about interpreting data in the D2C space, we encourage you to send your application to hiring@knya.in and be a key player in our dynamic team.
Posted 2 weeks ago
0.0 - 2.0 years
0 - 0 Lacs
Gwalior, Madhya Pradesh
On-site
Job Title: Data Engineer (Female Candidate) Location: Gwalior (On-Site) Experience: 2+ years (female candidates will be given preference) Employment Type: Full-time Company Overview: Synram Software Services Pvt. Ltd., a subsidiary of the renowned FG International GmbH, Germany, is a premier IT solutions provider specializing in ERP systems, e-commerce platforms, mobile applications, and digital marketing. We are committed to delivering tailored solutions that drive success across various industries. Job Summary: We are seeking a highly skilled and motivated Data Engineer to join our growing team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure. You will work closely with data analysts, data scientists, and software engineers to support data-driven decision-making across the organization. Key Responsibilities: Develop, construct, test, and maintain data architectures (e.g., databases, large-scale processing systems). Design and implement ETL (Extract, Transform, Load) processes to support data integration from various sources. Optimize data systems for performance and reliability. Collaborate with cross-functional teams to understand data requirements and deliver appropriate solutions. Ensure data quality, integrity, and security across all datasets. Automate repetitive data tasks and improve pipeline efficiency. Monitor and troubleshoot production data pipelines and processes. Document systems, processes, and best practices. Required Skills & Qualifications: Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field. 2+ years of experience as a Data Engineer or in a similar role. Strong proficiency in SQL, Python, or Scala. Experience with data pipeline tools (e.g., Apache Airflow, Luigi). Familiarity with big data tools (e.g., Hadoop, Spark). Experience with cloud platforms such as AWS, GCP, or Azure (preferred). Hands-on experience with data warehouse solutions like Snowflake, Redshift, or BigQuery. Good understanding of data modeling, schema design, and performance tuning. Strong problem-solving skills and attention to detail. Preferred Qualifications: Knowledge of CI/CD for data applications. Experience with containerization tools like Docker and Kubernetes. Exposure to data governance and compliance standards. Apply now and be a part of a data-driven transformation journey with Synram Software Services Pvt. Ltd. Mail us at career@synram.co or call us on +91-9111381555. Job Types: Full-time, Permanent Pay: ₹20,890.34 - ₹35,845.40 per month Benefits: Flexible schedule; Internet reimbursement; Leave encashment Schedule: Day shift; Fixed shift; Weekend availability Supplemental Pay: Joining bonus; Overtime pay; Performance bonus Ability to commute/relocate: Gwalior, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Experience: Data Engineer: 2 years (Preferred) Language: English (Preferred) Work Location: In person Application Deadline: 20/07/2025 Expected Start Date: 12/07/2025
Posted 2 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Chennai
Work from Office
Job Summary: We are seeking a highly skilled Data Engineer to design, develop, and maintain robust data pipelines and architectures. The ideal candidate will transform raw, complex datasets into clean, structured, and scalable formats that enable analytics, reporting, and business intelligence across the organization. This role requires strong collaboration with data scientists, analysts, and cross-functional teams to ensure timely and accurate data availability and system performance. Key Responsibilities: Design and implement scalable data pipelines to support real-time and batch processing. Develop and maintain ETL/ELT processes that move, clean, and organize data from multiple sources. Build and manage modern data architectures that support efficient storage, processing, and access. Collaborate with stakeholders to understand data needs and deliver reliable solutions. Perform data transformation, enrichment, validation, and normalization for analysis and reporting. Monitor and ensure the quality, integrity, and consistency of data across systems. Optimize workflows for performance, scalability, and cost-efficiency. Support cloud and on-premise data integrations, migrations, and automation initiatives. Document data flows, schemas, and infrastructure for operational and development purposes. Apply best practices in data governance, security, and compliance. Required Qualifications & Skills: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Proven experience of 6+ years in data engineering, ETL development, or data pipeline management. Proficiency with tools and technologies such as: SQL, Python, Spark, and Scala; ETL tools (e.g., Apache Airflow, Talend); cloud platforms (e.g., AWS, GCP, Azure); big data tools (e.g., Hadoop, Hive, Kafka); data warehouses (e.g., Snowflake, Redshift, BigQuery). Strong understanding of data modeling, data architecture, and data lakes. Experience with CI/CD, version control, and working in Agile environments. Preferred Qualifications: Experience with data observability and monitoring tools. Knowledge of data cataloging and governance frameworks. AWS/GCP/Azure data certification is a plus.
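The real-time side of the responsibilities above can be pictured with a short Spark Structured Streaming sketch that ingests JSON events from Kafka and lands them in the lake. The broker, topic, schema, and paths are assumptions for illustration, and the Kafka connector package must be on the Spark classpath.

```python
# Hedged sketch of the real-time ingestion path: Spark Structured Streaming
# reads JSON events from Kafka and lands them as Parquet in the lake.
# Broker, topic, schema, and paths are placeholders; requires the
# spark-sql-kafka connector package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers raw bytes; parse the JSON payload into typed columns
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://lake/events/")
    .option("checkpointLocation", "s3://lake/_checkpoints/events/")
    .start()
)
query.awaitTermination()
```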
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
DATABASE ENGINEERING Responsibilities: Design, develop, and manage databases on the AWS cloud platform. Develop and maintain automation scripts or jobs to perform routine database tasks such as provisioning, backups, restores, and data migrations. Build and maintain automated testing frameworks for database changes and upgrades to minimize the risk of introducing errors. Implement self-healing mechanisms to automatically recover from database failures or performance degradation. Integrate database automation tools with CI/CD pipelines to enable continuous delivery and deployment of database changes. Collaborate with cross-functional teams to understand their data requirements and ensure that the databases meet their needs. Implement and manage database security policies, including access control, data encryption, and backup and recovery procedures. Ensure that database backups and disaster recovery procedures are in place and tested regularly. Develop and maintain database documentation, including data dictionaries, data models, and technical specifications. Stay up-to-date with the latest cloud technologies and trends, and evaluate new tools and products that could improve database performance and scalability. Requirements (Postgres/MySQL/SQL Server, AWS CloudFormation/CDK, Python): Bachelor's degree in Computer Science, Information Technology, or a related field. Minimum of 3-6 years of experience in designing, building, and administering databases on the AWS cloud platform. Strong experience with Infrastructure as Code (CloudFormation/AWS CDK) and automation experience in Python. In-depth knowledge of AWS database services such as Amazon RDS, EC2, S3, Amazon Aurora, and Amazon Redshift, as well as Postgres/MySQL/SQL Server. Strong understanding of database design principles, data modelling, and normalisation. Experience with database migration to the AWS cloud platform. Strong understanding of database security principles and best practices. Excellent troubleshooting and problem-solving skills. Ability to work independently and in a team environment.
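A small example of the routine-task automation this role describes, assuming boto3 and hypothetical identifiers: take a timestamped manual RDS snapshot, then prune manual snapshots older than a retention window.

```python
# Illustrative automation for routine backup tasks: take a timestamped manual
# RDS snapshot and prune old ones. Region, instance, and retention are
# hypothetical placeholders.
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds", region_name="ap-south-1")
INSTANCE = "orders-postgres-prod"
RETENTION_DAYS = 14

# Create a timestamped manual snapshot
stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
rds.create_db_snapshot(
    DBInstanceIdentifier=INSTANCE,
    DBSnapshotIdentifier=f"{INSTANCE}-{stamp}",
)

# Prune manual snapshots past the retention window
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
snapshots = rds.describe_db_snapshots(
    DBInstanceIdentifier=INSTANCE, SnapshotType="manual"
)["DBSnapshots"]
for snap in snapshots:
    created = snap.get("SnapshotCreateTime")
    if created and created < cutoff:
        rds.delete_db_snapshot(DBSnapshotIdentifier=snap["DBSnapshotIdentifier"])
```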
Posted 2 weeks ago
15.0 years
0 Lacs
Gurugram, Haryana, India
Remote
About VWO VWO is a leading Digital Experience Optimization platform trusted by over 3,000 businesses in 100+ countries, including global brands like Samsung, Vodafone, Toyota, HBO, and Domino's. What began 15 years ago as one of the world’s first A/B testing tools has since evolved into a comprehensive, enterprise-grade platform used by product, marketing, and growth teams to experiment, personalize, analyze behavior, and build exceptional digital experiences. Today, VWO offers a full-stack suite for A/B testing, multivariate testing, feature rollouts, heatmaps, session recordings, behavioural analytics, surveys, personalization, and more across web, mobile, and server-side applications — all in one unified platform. We enable teams to make confident, data-driven decisions that drive user engagement, retention, and conversion. VWO is a profitable, founder-led business with $50M+ ARR, strong EBITDA margins, and a history of capital-efficient, sustainable growth. In January 2025, Everstone Capital acquired a majority stake in the company to help accelerate our global expansion, both organically and inorganically. We are a fully remote team of 450+ people, with go-to-market teams across the Americas, Europe, and APAC, and product and engineering anchored in India. Our culture values deep thinking, fast execution, and strong ownership, with minimal bureaucracy and high autonomy. Despite our scale, we continue to operate with the agility and ambition of a startup. We are seeking a Principal Data Architect to lead our Data Platforms Team and drive innovation in data engineering, analytics, and governance. Role Overview: As a Principal Data Architect, you will be responsible for leading the design, development, and scaling of our data infrastructure. You will collaborate with product managers, data scientists, and engineers to ensure our data pipelines and architectures are robust, scalable, and aligned with business objectives. The role requires a strong background in backend development, data processing, and scalable architecture design. Key Responsibilities: ● Lead and mentor a team of data engineers, ensuring high performance and career growth. ● Architect and optimize scalable data infrastructure, ensuring high availability and reliability. ● Drive the development and implementation of data governance frameworks and best practices. ● Work closely with cross-functional teams to define and execute a data roadmap. ● Optimize data processing workflows for performance and cost efficiency. ● Ensure data security, compliance, and quality across all data platforms. ● Foster a culture of innovation and technical excellence within the data team. Required Skills & Experience: ● 10+ years of experience in data engineering, with at least 3+ years in a leadership role. ● Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, GoLang, JavaScript, HTML, and CSS. ● Proficiency in SQL, Python, and Scala for data processing and analytics. ● Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services. ● Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice. ● Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks. ● Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery. ● Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
● Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB. ● Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK. ● Proven ability to drive technical strategy and align it with business objectives. ● Strong leadership, communication, and stakeholder management skills. Preferred Qualifications : ● Experience in machine learning infrastructure or MLOps is a plus. ● Exposure to real-time data processing and analytics. ● Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture. ● Prior experience in a SaaS or high-growth tech company. Why Join Wingify? ● Work on cutting-edge data technologies and large-scale distributed systems. ● A remote-friendly work environment with flexible working hours. ● A culture of innovation, learning, and open communication. ● Competitive compensation, benefits, and growth opportunities. Join us in shaping the future of data-driven decision-making at Wingify!
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Festivals From India is helping ASIGN hire a Data Engineer to join their team. They are looking for an experienced Data Engineer with a strong grasp of ELT architecture and hands-on experience to help them build and maintain robust data pipelines. This is a hands-on role for someone passionate about structured data, automation, and scalable infrastructure. The ideal candidate will be responsible for sourcing, ingesting, transforming, and storing data, and for making data accessible and reliable for data analysis, machine learning, and reporting. You will play a key role in maintaining and evolving the data architecture and ensuring that data flows efficiently and securely. Please note: the vetting process for this role comprises 2-3 rounds of interviews and may be followed by a brief assignment. Festivals From India is hiring for this role on behalf of ASIGN. This is an on-site, full-time position based in Chennai. The salary band for this role is available upon request. Essential Requirements: Minimum 5 years of hands-on experience in data engineering. Solid understanding of and experience with ELT pipelines and modern data stack tools. Practical knowledge of one or more orchestrators (Dagster, Airflow, Prefect, etc.). Proficiency in Python and SQL. Experience working with APIs and data integration from multiple sources. Familiarity with one or more cloud data warehouses (e.g., Snowflake, BigQuery, Redshift). Strong problem-solving and debugging skills. Essential Qualifications: Bachelor’s/Master’s degree in Computer Science, Engineering, Statistics, or a related field. Proven experience (5+ years) in data engineering, data integration, and data management. Hands-on experience with data sourcing tools and frameworks (e.g., Scrapy, BeautifulSoup, Selenium, Playwright). Proficiency in Python and SQL for data manipulation and pipeline development. Experience with cloud-based data platforms (AWS, Azure, or GCP) and data warehouse tools (e.g., Redshift, BigQuery, Snowflake). Familiarity with workflow orchestration tools (e.g., Airflow, Prefect, Dagster). Strong understanding of relational and non-relational databases (PostgreSQL, MongoDB, etc.). Solid understanding of data modeling, ETL best practices, and data governance principles. Systems knowledge and experience working with Docker. Strong and creative problem-solving skills and the ability to think critically about data engineering solutions. Effective communication and collaboration skills. Ability to work independently and as part of a team in a fast-paced, dynamic environment.
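As a sketch of the ELT pattern ASIGN describes, here is a minimal flow using Prefect (one of the orchestrators named above): extract from a REST API, load the raw payloads into Postgres, then transform in SQL. The endpoint, DSN, and table names are placeholders, not part of the posting.

```python
# Minimal Prefect 2 flow sketching an ELT pipeline: extract from an API,
# load raw JSON into Postgres, then transform in-warehouse with SQL.
# Endpoint, DSN, and table names are hypothetical.
import json

import psycopg2
import requests
from prefect import flow, task

DSN = "dbname=warehouse user=etl host=localhost"

@task(retries=3, retry_delay_seconds=30)
def extract() -> list:
    resp = requests.get("https://api.example.com/v1/events", timeout=30)
    resp.raise_for_status()
    return resp.json()

@task
def load_raw(rows: list) -> None:
    # Land payloads untouched; transformation happens in the warehouse (ELT)
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        for row in rows:
            cur.execute("INSERT INTO raw.events (payload) VALUES (%s)",
                        (json.dumps(row),))

@task
def transform() -> None:
    # In-warehouse SQL transform from the raw layer to the analytics layer
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO analytics.events_clean (event_id, occurred_at) "
            "SELECT payload->>'id', (payload->>'ts')::timestamptz "
            "FROM raw.events ON CONFLICT DO NOTHING"
        )

@flow
def elt_events():
    rows = extract()
    load_raw(rows)
    transform()

if __name__ == "__main__":
    elt_events()
```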
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary: We are seeking a skilled Data Engineer to join our dynamic team. In this role, you will be responsible for implementing and maintaining scalable data pipelines and infrastructure on the AWS cloud platform. The ideal candidate will have experience with AWS services, particularly in the realm of big data processing and analytics. The role involves working closely with cross-functional teams to support data-driven decision-making, with a focus on delivering business objectives while improving efficiency and ensuring high service quality. Key Responsibilities: Develop and maintain large-scale data pipelines that can handle large datasets from multiple sources. Implement real-time data replication and batch processing of data using distributed computing platforms like Spark and Kafka. Optimize performance of data processing jobs and ensure system scalability and reliability. Collaborate with DevOps teams to manage infrastructure, including cloud environments like AWS. Collaborate with data scientists, analysts, and business stakeholders to develop tools and platforms that enable advanced analytics and reporting. Maintain a healthy working relationship with business partners/users and other MLI departments. Take responsibility for the overall performance, cost, and delivery of technology solutions. Key technical competencies/skills required: Proficiency in programming languages such as Python, PySpark, and SQL/PL-SQL for implementing data pipelines and ETL processes. Hands-on knowledge of AWS services such as S3, Lambda, EMR, Glue, Redshift, Athena, etc. Knowledge of data modelling and of modern file and table formats. Cloud/hybrid cloud (preferably AWS) solutions for a data strategy spanning the data lake, BI, and analytics. Experience with data warehousing concepts. Desired qualifications and experience: Bachelor’s degree in Computer Science, Engineering, or a related field (Master’s preferred). Proven experience of 5+ years as a Data Engineer or in a similar role with a strong focus on the AWS cloud. Strong analytical and problem-solving skills with attention to detail. Excellent communication and collaboration skills.
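One way to picture the real-time replication responsibility, as a hedged sketch: a Kafka consumer that lands micro-batches of change events in S3 for downstream Glue/Redshift processing. The topic, broker, bucket, and batching thresholds are invented, and the kafka-python client is an assumed library choice.

```python
# Hedged sketch: consume change events from Kafka and land micro-batches in
# S3 for downstream Glue/Redshift processing. Topic, broker, bucket, and
# thresholds are placeholders; kafka-python is an assumed client library.
import json
import time

import boto3
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders-cdc",
    bootstrap_servers="broker:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
s3 = boto3.client("s3")

batch, last_flush = [], time.time()
for message in consumer:
    batch.append(message.value)
    # Flush every 500 records or 60 seconds, whichever comes first
    if len(batch) >= 500 or time.time() - last_flush > 60:
        key = f"cdc/orders/{int(time.time())}.jsonl"
        body = "\n".join(json.dumps(r) for r in batch)
        s3.put_object(Bucket="lake-raw", Key=key, Body=body.encode("utf-8"))
        batch, last_flush = [], time.time()
```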
Posted 2 weeks ago
0.0 years
4 - 16 Lacs
Chennai District, Tamil Nadu
On-site
Role: Database Admin / Developer Experience: 7+ years Notice Period: 0 to 45 days Location: Chennai Max CTC: 15 LPA Mandatory skills: Expertise in PostgreSQL administration/SQL/Oracle (any); AWS Aurora / Redshift PostgreSQL; Shell/Python scripting. Added advantage with the below skills: SQL Server; Redshift; OpenSearch/Elasticsearch. Job Types: Full-time, Permanent Pay: ₹466,447.58 - ₹1,695,164.04 per year Ability to commute/relocate: Chennai District, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Experience in database administration; Current CTC; Expected CTC; Notice Period Work Location: In person
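For a flavor of the Python scripting this DBA role expects, here is an illustrative sketch: flag long-running queries on a PostgreSQL/Aurora instance via pg_stat_activity. The DSN and threshold are placeholders.

```python
# Illustrative DBA health-check: list queries on a PostgreSQL/Aurora instance
# that have been running longer than a threshold. DSN and threshold are
# hypothetical.
from datetime import timedelta

import psycopg2

THRESHOLD = timedelta(minutes=15)
DSN = "host=aurora-prod.example.com dbname=app user=dba"

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT pid, usename, state, now() - query_start AS runtime,
               left(query, 80) AS query_head
        FROM pg_stat_activity
        WHERE state <> 'idle' AND now() - query_start > %s
        ORDER BY runtime DESC
        """,
        (THRESHOLD,),  # psycopg2 adapts timedelta to a SQL interval
    )
    for pid, user, state, runtime, query_head in cur.fetchall():
        print(f"pid={pid} user={user} state={state} runtime={runtime} :: {query_head}")
```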
Posted 2 weeks ago
10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We’re looking for a Cloud Security Lead with 10+ years of experience, including a minimum of 5 years in cloud security. Job details: Role/Position: Cloud Security Lead Location: Thane (WFO) Budget: 25 LPA 🔐 Key Responsibilities: Lead cloud security assessments and risk mitigation strategies across AWS, Azure, GCP, and other platforms. Design, implement, and configure secure cloud environments. Ensure compliance with global standards like ISO 27001, NIST 800-53, PCI DSS, SOC2, etc. Troubleshoot and resolve cloud security issues in multi-cloud environments. Collaborate with engineering, audit, and IT GRC teams. Drive documentation, policy development, and internal training on cloud security best practices. Stay updated with emerging threats and technologies in cloud security. ✅ Ideal Candidate Should Have: Strong expertise in AWS, Azure, Alibaba Cloud, etc. Hands-on experience with services like EC2, S3, RDS, Redshift, CloudFormation, and Auto Scaling. Proven track record in vulnerability management, security architecture, and incident response. Excellent leadership, communication, and stakeholder management skills. Regards, Komal
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
🚨 We're Hiring: Senior BI Consultant 📍 Location: Pune | 🕐 Full-Time | 🌐 On-Site/Hybrid We're hiring a Senior BI Consultant for a reputed company. The ideal candidate thrives on building scalable data solutions and has hands-on expertise in Python, AWS (Lambda, S3, Redshift), ETL pipelines, and data science methodologies. 🔍 Key Responsibilities 💡 Data Integration & Pipelines: Design and manage ETL/ELT pipelines to ingest and process data in AWS Redshift. Automate workflows using Python and AWS Lambda. Ensure robust, scalable, and reusable data solutions. 📊 Analytics & Data Science: Analyze large datasets to extract business insights. Integrate predictive models and advanced analytics into BI frameworks. Conduct A/B testing and statistical modeling to support decisions. 🗃️ Data Warehousing: Architect and maintain optimized data warehouses and marts. Implement best practices in data modeling and performance tuning. 🔐 Data Governance & Compliance: Define and enforce data governance policies. Ensure data security, cataloging, and compliance with standards. Collaborate cross-functionally to implement robust access controls. ✅ Key Requirements: Strong proficiency in Python, AWS Lambda/S3/Redshift, and SQL. Solid understanding of ETL workflows, data modeling, and data warehousing concepts. Experience with machine learning/statistical models in production. Excellent communication and stakeholder management skills. 💼 Why This Opportunity Stands Out: Work with a global team and international clients. High-impact role with ownership of end-to-end BI solutions. Opportunity to lead and shape data initiatives from the ground up. Collaborative work culture focused on innovation and growth. 📩 Apply Now: If this sounds like the opportunity you’ve been waiting for, we’d love to hear from you! 📧 Send your resume to placements@opinegroup.com or DM me. #BIJobs #SeniorBIConsultant #DataScience #AWS #Python #Redshift #HiringNow #iKORP #DataJobs #PuneJobs
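To make the Lambda-plus-Redshift automation above concrete, here is a hedged sketch of an S3-triggered Lambda that issues a Redshift COPY through the Redshift Data API. The cluster, database, IAM role, and target table are hypothetical.

```python
# Hedged sketch: an S3-triggered Lambda that loads a newly landed file into
# Redshift via a COPY issued through the Redshift Data API. The cluster,
# database, IAM role, and target table are hypothetical.
from urllib.parse import unquote_plus

import boto3

redshift = boto3.client("redshift-data")

def handler(event, context):
    # Standard S3 put-event shape: bucket name and (URL-encoded) object key
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = unquote_plus(record["object"]["key"])

    sql = (
        f"COPY analytics.web_events FROM 's3://{bucket}/{key}' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
        "FORMAT AS JSON 'auto';"
    )
    resp = redshift.execute_statement(
        ClusterIdentifier="bi-cluster",
        Database="analytics",
        DbUser="loader",
        Sql=sql,
    )
    # The statement runs asynchronously; return its id for tracking
    return {"statementId": resp["Id"]}
```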
Posted 2 weeks ago
3.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: AWS Data Engineer. About the Role: We are looking for a highly technical and experienced AWS Data Engineer to join our team. The successful candidate will be responsible for designing, developing, and deploying machine learning models to solve complex business problems by leveraging large datasets on the AWS platform. This role requires working across the entire ML lifecycle, from data collection and preprocessing to model training, evaluation, and deployment using AWS AI services. The goal is to create efficient self-learning applications capable of evolving over time. If you are passionate about data engineering and machine learning, possess strong programming skills, and have a deep understanding of statistical methods and various ML algorithms, we want to hear from you. Responsibilities: Design, develop, and deploy machine learning models on AWS to address complex business challenges. Work across the ML lifecycle, including data collection, preprocessing, model training, evaluation, and deployment using services such as Amazon SageMaker, AWS Glue, and Amazon S3. Leverage large datasets to derive insights and create data-driven solutions using AWS analytics tools. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions on AWS. Optimize and maintain data pipelines and systems on AWS to ensure efficient data processing and storage. Implement and monitor model performance, making necessary adjustments to improve accuracy and efficiency using AWS monitoring tools. Keep up to date with the latest advancements in AWS AI and machine learning technologies. Document processes and models to ensure transparency and reproducibility. Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field. Proven experience as a Data Engineer or in a similar role, with a strong focus on machine learning and AWS and 3 to 8 years of relevant experience. Proficiency in programming languages such as Python, with experience using AWS SDKs and APIs. Deep understanding of statistical methods and various machine learning algorithms. Experience with AWS AI and ML frameworks and libraries, such as Amazon SageMaker, AWS Glue, and AWS Lambda. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Knowledge of big data technologies and tools, such as Hadoop, Spark, or Kafka, is a plus. Familiarity with the AWS cloud platform and services like Amazon EC2, Amazon RDS, and Amazon Redshift is an advantage. Ability to work independently and manage multiple projects simultaneously.
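A minimal sketch of the training stage of that lifecycle, written in the style of a SageMaker script-mode entry point (SageMaker injects the SM_* environment variables inside the training container). The file name, columns, and hyperparameters are invented for illustration.

```python
# Hedged sketch of a SageMaker script-mode training entry point; SageMaker
# injects SM_CHANNEL_TRAIN and SM_MODEL_DIR inside the training container.
# The file name, columns, and hyperparameters are invented.
import argparse
import os

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAIN", "./data"))
    parser.add_argument("--model-dir", default=os.environ.get("SM_MODEL_DIR", "./model"))
    args = parser.parse_args()

    df = pd.read_csv(os.path.join(args.train, "train.csv"))
    X, y = df.drop(columns=["label"]), df["label"]
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_tr, y_tr)
    print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

    # SageMaker archives whatever lands in SM_MODEL_DIR as the model artifact
    os.makedirs(args.model_dir, exist_ok=True)
    joblib.dump(model, os.path.join(args.model_dir, "model.joblib"))
```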
Posted 2 weeks ago
0.0 - 6.0 years
0 Lacs
Saidapet, Chennai, Tamil Nadu
On-site
Job Information: Date Opened: 07/11/2025; City: Saidapet; Country: India; Job Role: Data Engineering; State/Province: Tamil Nadu; Industry: IT Services; Job Type: Full time; Zip/Postal Code: 600096. Job Description: Introduction to the Role: Are you passionate about unlocking the power of data to drive innovation and transform business outcomes? Join our cutting-edge Data Engineering team and be a key player in delivering scalable, secure, and high-performing data solutions across the enterprise. As a Data Engineer, you will play a central role in designing and developing modern data pipelines and platforms that support data-driven decision-making and AI-powered products. With a focus on Python, SQL, AWS, PySpark, and Databricks, you'll enable the transformation of raw data into valuable insights by applying engineering best practices in a cloud-first environment. We are looking for a highly motivated professional who can work across teams to build and manage robust, efficient, and secure data ecosystems that support both analytical and operational workloads. Accountabilities: Design, build, and optimize scalable data pipelines using PySpark, Databricks, and SQL on AWS cloud platforms. Collaborate with data analysts, data scientists, and business users to understand data requirements and ensure reliable, high-quality data delivery. Implement batch and streaming data ingestion frameworks from a variety of sources (structured, semi-structured, and unstructured data). Develop reusable, parameterized ETL/ELT components and data ingestion frameworks. Perform data transformation, cleansing, validation, and enrichment using Python and PySpark. Build and maintain data models, data marts, and logical/physical data structures that support BI, analytics, and AI initiatives. Apply best practices in software engineering, version control (Git), code reviews, and agile development processes. Ensure data pipelines are well-tested, monitored, and robust, with proper logging and alerting mechanisms. Optimize performance of distributed data processing workflows and large datasets. Leverage AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena) for data orchestration and lakehouse architecture design. Participate in data governance practices and ensure compliance with data privacy, security, and quality standards. Contribute to documentation of processes, workflows, metadata, and lineage using tools such as data catalogs or Collibra (if applicable). Drive continuous improvement in engineering practices, tools, and automation to increase productivity and delivery quality. Essential Skills / Experience: 4 to 6 years of professional experience in Data Engineering or a related field. Strong programming experience with Python, including using Python for data wrangling, pipeline automation, and scripting. Deep expertise in writing complex and optimized SQL queries on large-scale datasets. Solid hands-on experience with PySpark and distributed data processing frameworks. Expertise working with Databricks for developing and orchestrating data pipelines. Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda. Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas). Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions. Understanding of data lake, lakehouse, and data warehouse architectures.
Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions). Strong troubleshooting and performance optimization skills in large-scale data processing environments. Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams. Desirable Skills / Experience: AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional). Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch). Experience working in healthcare, life sciences, finance, or another regulated industry. Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.). Knowledge of modern data architectures (Data Mesh, Data Fabric). Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming. Experience with data visualization tools such as Power BI, Tableau, or QuickSight. Work Environment & Collaboration: We value a hybrid, collaborative environment that encourages shared learning and innovation. You will work closely with product owners, architects, analysts, and data scientists across geographies to solve real-world business problems using cutting-edge technologies and methodologies. We encourage flexibility while maintaining a strong in-office presence for better team synergy and innovation. About Agilisium: Agilisium is an AWS Advanced Consulting Partner that enables companies to accelerate their "Data-to-Insights" leap. With $25+ million in annual revenue and over 40% year-over-year growth, Agilisium is one of the fastest-growing IT solution providers in Southern California. Our most important asset? People. Talent management plays a vital role in our business strategy. We're looking for "drivers": big thinkers with a growth and strategic mindset, people who are committed to customer obsession and aren't afraid to experiment with new ideas. And we are all about finding and nurturing individuals who are ready to do great work. At Agilisium, you'll collaborate with great minds while being challenged to meet and exceed your potential.
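As one concrete instance of the reusable PySpark components this role builds, here is a small parameterized helper that deduplicates a raw feed, keeping the latest record per business key. The paths and column names are illustrative assumptions.

```python
# Illustrative reusable PySpark component: keep only the most recent record
# per business key when curating a raw feed. Paths and columns are invented.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def latest_per_key(df: DataFrame, key_cols: list, ts_col: str) -> DataFrame:
    """Deduplicate df, keeping the newest row for each business key."""
    w = Window.partitionBy(*key_cols).orderBy(F.col(ts_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )

spark = SparkSession.builder.appName("curate_customers").getOrCreate()
raw = spark.read.parquet("s3://raw/customers/")
curated = latest_per_key(raw, ["customer_id"], "updated_at")
curated.write.mode("overwrite").parquet("s3://curated/customers/")
```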
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description Zimetrics is a technology services and solutions provider specializing in Data, AI, and Digital. We help enterprises leverage the economic potential and business value of data from systems, machines, connected devices, and human-generated content. Our core principles are Integrity, Intellect, and Ingenuity, guiding our value system, engineering expertise, and organizational behavior. We are problem solvers and innovators who challenge conventional wisdom and believe in possibilities. Key Responsibilities Design scalable and secure cloud-based data architecture solutions Lead data modeling, integration, and migration strategies across platforms Engage directly with clients to understand business needs and translate them into technical solutions Support sales/pre-sales teams with solution architecture, technical presentations, and proposals Collaborate with cross-functional teams including engineering, BI, and product Ensure best practices in data governance, security, and performance optimization Key Requirements Strong experience with Cloud platforms (AWS, Azure, or GCP) Deep understanding of Data Warehousing concepts and tools (Snowflake, Redshift, BigQuery, etc.) Proven expertise in data modeling (conceptual, logical, and physical) Excellent communication and client engagement skills Experience in pre-sales or solution consulting is a strong advantage Ability to present complex technical concepts to non-technical stakeholders (ref:hirist.tech)
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Company Overview Bixware Technologies Pvt Ltd is a rapidly growing technology firm with offices in Coimbatore and Mumbai. We specialize in delivering Microsoft BI solutions and SAP resourcing services, while also building capabilities across open source technologies, modern web stacks, cloud infrastructure, and Office 365 ecosystems. Our client-centric approach and cross-domain expertise enable us to deliver scalable, secure, and efficient software solutions. Role Overview: We are seeking a skilled Python Developer to join our dynamic team. The ideal candidate will have strong programming fundamentals, deep knowledge of Python ecosystems, and hands-on experience in designing, building, and maintaining web applications and data-driven services. This role demands both technical proficiency and the ability to collaborate across teams. Key Responsibilities: Design, develop, and maintain scalable and reusable Python-based applications. Implement backend components using frameworks like Django or Flask. Integrate applications with relational databases like MS SQL and Amazon Redshift, including writing efficient SQL queries. Develop and consume RESTful APIs and third-party integrations. Utilize data science libraries (e.g., pandas, NumPy, scikit-learn, matplotlib) for data processing, analysis, and visualization. Apply object-oriented programming (OOP) principles to write clean, modular, and testable code. Perform unit testing, code reviews, and debugging to ensure high software quality. Collaborate with front-end developers, DevOps teams, and business stakeholders to ensure alignment between technical implementation and business requirements. Use version control systems like Git to manage code repositories and support CI/CD practices. Follow agile development methodologies and contribute to sprint planning and estimations. Required Skills & Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related discipline. 3 to 5 years of professional experience in Python development. Proficiency in Python core concepts and best practices. Solid experience with at least one Python web framework (preferably Django or Flask). Strong understanding of SQL databases (MS SQL, Redshift) and data modeling. Experience in developing and consuming RESTful APIs. Exposure to data science and analytics libraries. Familiarity with version control systems such as Git and tools like GitHub or GitLab. Working knowledge of testing frameworks (e.g., PyTest, unittest) and deployment strategies. Excellent analytical thinking, problem-solving skills, and attention to detail. Strong verbal and written communication skills; ability to work independently and collaboratively. (ref:hirist.tech)
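For a feel of the backend work described, here is a minimal Flask sketch: a REST endpoint backed by a parameterized SQL query. The connection string and table are placeholders, and a Postgres-protocol driver is used here for illustration (it also speaks to Redshift); a different driver would be needed for MS SQL.

```python
# Minimal Flask sketch: a REST endpoint backed by a parameterized SQL query.
# DSN and table are placeholders; psycopg2 is assumed for the database layer.
import psycopg2
from flask import Flask, jsonify

app = Flask(__name__)
DSN = "host=db.example.com dbname=sales user=api"

@app.route("/api/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, customer_id, amount, status FROM orders WHERE id = %s",
            (order_id,),  # parameterized to avoid SQL injection
        )
        row = cur.fetchone()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(dict(zip(("id", "customer_id", "amount", "status"), row)))

if __name__ == "__main__":
    app.run(debug=True)
```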
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
Data Scientist (5+ Years of Experience) We are seeking a highly motivated Data Scientist with over 5 years of hands-on experience in data mining, statistical analysis, and developing high-quality machine learning models. The ideal candidate will have a passion for solving real-world problems using data-driven approaches and possess strong technical expertise across various data science domains. Key Responsibilities: Apply advanced data mining techniques and statistical analysis to extract actionable insights. Design, develop, and deploy robust machine learning models to address complex business challenges. Conduct A/B and multivariate experiments to evaluate model performance and optimize outcomes. Monitor, analyze, and enhance the performance of machine learning models post-deployment. Collaborate cross-functionally to build customer cohorts for CRM campaigns and conduct market basket analysis. Stay updated with state-of-the-art techniques in NLP, particularly within the e-commerce domain. Required Skills & Qualifications: Programming & Tools: Proficient in Python, PySpark, and SQL for data manipulation and analysis. Machine Learning & AI: Strong experience with ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch) and expertise in NLP, Computer Vision, Recommender Systems, and Optimization techniques. Cloud & Big Data: Hands-on experience with AWS services, including Glue, EKS, S3, SageMaker, and Redshift. Model Deployment: Experience deploying pre-trained models from platforms like Hugging Face and AWS Bedrock. DevOps & MLOps: Understanding of Git, Docker, and CI/CD pipelines, and experience deploying models with frameworks such as FastAPI. Advanced NLP: Experience in building, retraining, and optimizing NLP models for diverse use cases. Preferred Qualifications: Strong research mindset with a keen interest in exploring new data science methodologies. Background in e-commerce analytics is a plus. If you're passionate about leveraging data to drive impactful business decisions and thrive in a dynamic environment, we'd love to hear from you!
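An illustrative baseline for the NLP use cases mentioned above: a TF-IDF plus logistic regression pipeline for classifying e-commerce review sentiment. The toy data is invented; a production model would be trained on real labeled reviews.

```python
# Illustrative NLP baseline for e-commerce text: TF-IDF features feeding a
# logistic regression classifier. The toy reviews and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "fast delivery, great quality",
    "arrived broken, asking for a refund",
    "love this product, will buy again",
    "terrible packaging and late shipping",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("lr", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

print(clf.predict(["refund requested, item damaged"]))  # expected: [0]
```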
Posted 2 weeks ago
7.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are As a Company: GoKwik was founded in 2020 with one simple mission: to democratize the shopping experience and increase GMV realization for e-commerce businesses. The company is backed by Sequoia Capital India, Matrix Partners India, RTP Global, and marquee angels. GoKwik is an e-commerce enablement company focussed predominantly on solving crucial e-commerce issues like boosting conversion rates across the e-commerce funnel and reducing RTO for our partners. It leverages AI/ML technologies to solve hard-hitting problems like RTO and to increase CoD conversion rates. GoKwik's 1-click Kwik checkout improves checkout conversion rates, ensuring higher GMV realization and reduced marketing CAC. What we do is unique, with zero immediate competition in India today; we are therefore building a team of real rockstars in their fields to fuel this profit-making start-up, which is growing at a rapid pace. Summary: The Data Engineering Manager will be responsible for leading a team of data engineers and working closely with product managers, data scientists, business intelligence teams, and SDEs to design and launch data-driven strategies across the organization. They will also be responsible for designing the overall data architecture that drives insights. You’ll Spend Time On The Following: Lead a team of data engineers and guide them with the best data strategies in line with data needs. Identify, design, and implement process improvements that include building/re-engineering data models, data architectures, pipelines, and data applications. Continuously look for data optimization processes and oversee data management, governance, security, and analysis. Hire the best talent, identify the potential of each team member, and offer technical guidance, leadership, and advice on planning, designing, and implementing data solutions. Manage data delivery through high-performing dashboards, visualizations, and reports. Ensure data quality and security across every product vertical and related areas. Design, create, and launch new data models and pipelines as per needs. Act as a project manager for data projects, and mentor and grow the team by hiring skilled data engineers. Work towards achieving high performance, operational excellence, accuracy, and reliability of the overall system. Mentor and grow the data warehouse, data modeling, and data visualization teams to establish a data-driven culture. Utilize tools and technologies to create data architecture that supports new data initiatives and is useful in next-gen products. Ensure test-driven products/pipelines that are easily maintainable and reusable. Design and build an infrastructure for extraction, transformation, and loading of data from a wide range of data sources. Build and maintain data foundations, including the tools, infrastructure, and pipelines that help the marketing and sales teams. Increase automation and build analytic solutions at scale to serve the business requirements. We’re Excited About You If You Have: Bachelor’s/Master’s degree in Computer Science, Mathematics, or equivalent computer programming training. At least 7 years of experience working in the Data Engineering field. Experience managing engineers and guiding a team of engineers through project planning, execution, and quality control stages. Strong project management and organizational skills and the ability to work independently in a fast-paced, quickly changing environment.
Ability to keep up with several projects at once and understand the impact of projects within a larger system. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Experience building data pipelines, architectures, and data sets from raw, loosely structured data. Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management. Experience supporting and working with cross-functional teams in a dynamic environment. Experience with relational SQL databases, including Postgres and MySQL. Experience with object-oriented design in Python. Experience with data pipeline and workflow management tools. Experience with AWS cloud services: EC2, RDS, Redshift, Glue, S3. Strong verbal and written communication skills. Ability to handle technical issues expertly, producing new design ideas and substitutions where required. Appreciation for, and contribution to, a collaborative team effort. Some Important Traits We Look For in This Role: Independent, resourceful, analytical, and able to solve problems effectively. Flexible, agile, and able to thrive in chaos. Excellent oral and written communication skills. Our Core Value Cultures: Merchant 1st, Innovation, Talent. The pace of our growth is incredible. If you want to tackle hard and interesting problems at scale and create an impact within an entrepreneurial environment, come join us!
Posted 2 weeks ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are As a Company: GoKwik was founded in 2020 with one simple mission: to democratize the shopping experience and increase GMV realization for e-commerce businesses. The company is backed by Sequoia Capital India, Matrix Partners India, RTP Global, and marquee angels. GoKwik is an e-commerce enablement company focussed predominantly on solving crucial e-commerce issues like boosting conversion rates across the e-commerce funnel and reducing RTO for our partners. It leverages AI/ML technologies to solve hard-hitting problems like RTO and to increase CoD conversion rates. GoKwik's 1-click Kwik checkout improves checkout conversion rates, ensuring higher GMV realization and reduced marketing CAC. What we do is unique, with zero immediate competition in India today; we are therefore building a team of real rockstars in their fields to fuel this profit-making start-up, which is growing at a rapid pace. Summary: The Data Engineer 2 will work closely with product managers, data scientists, business intelligence teams, and SDEs to design and launch data-driven strategies across the organization. You’ll Spend Time On The Following: Identify, design, and implement process improvements that include building/re-engineering data models, data architectures, pipelines, and data applications. Continuously look for data optimization processes and oversee data management, governance, security, and analysis. Ensure data quality and security across every product vertical and related areas. Design, create, and launch new data models and pipelines as per needs. Work towards achieving high performance, operational excellence, accuracy, and reliability of the overall system. Utilize tools and technologies to create data architecture that supports new data initiatives and is useful in next-gen products. Ensure test-driven products/pipelines that are easily maintainable and reusable. Design and build an infrastructure for extraction, transformation, and loading of data from a wide range of data sources. Build and maintain data foundations, including the tools, infrastructure, and pipelines that help the marketing and sales teams. Increase automation and build analytic solutions at scale to serve the business requirements. We’re Excited About You If You Have: Bachelor’s/Master’s degree in Computer Science, Mathematics, or equivalent computer programming training. At least 4 years of experience working in the Data Engineering field. Ability to keep up with several projects at once and understand the impact of projects within a larger system. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Experience building data pipelines, architectures, and data sets from raw, loosely structured data. Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management. Experience supporting and working with cross-functional teams in a dynamic environment. Experience with relational SQL databases, including Postgres and MySQL.
Experience with object-oriented design in Python. Experience with data pipeline and workflow management tools. Experience with AWS cloud services: EC2, RDS, Redshift, Glue, S3. Strong verbal and written communication skills. Ability to handle technical issues expertly, producing new design ideas and substitutions where required. Appreciation for, and contribution to, a collaborative team effort. Some Important Traits We Look For in This Role: Independent, resourceful, analytical, and able to solve problems effectively. Flexible, agile, and able to thrive in chaos. Excellent oral and written communication skills. Our Core Value Cultures: Merchant 1st, Innovation, Talent. The pace of our growth is incredible. If you want to tackle hard and interesting problems at scale and create an impact within an entrepreneurial environment, come join us!
Posted 2 weeks ago
13.0 - 17.0 years
0 Lacs
Pune, Maharashtra
On-site
You are an experienced professional with over 13 years of experience in engaging with clients and translating their business needs into technical solutions. You have a proven track record of working with cloud services on platforms like AWS, Azure, or GCP. Your expertise lies in utilizing AWS data services such as Redshift, Glue, Athena, and SageMaker. Additionally, you have a strong background in generative AI frameworks like GANs and VAEs and possess advanced skills in Python, including libraries like Pandas, NumPy, Scikit-learn, and TensorFlow. Your role involves designing and implementing advanced AI solutions, focusing on areas like NLP and innovative ML algorithms. You are proficient in developing and deploying NLP models and have experience in enhancing machine learning algorithms. Your knowledge extends to MLOps principles and best practices, and to the development and maintenance of CI/CD pipelines. Your problem-solving skills enable you to analyze complex data sets and derive actionable insights. Moreover, your excellent communication skills allow you to effectively convey technical concepts to non-technical stakeholders. In this role, you will be responsible for understanding clients' business use cases and technical requirements, and for translating them into technical designs that elegantly meet their needs. You will be instrumental in mapping decisions to requirements, identifying optimal solutions, and setting guidelines for NFR considerations during project implementation. Your tasks will include writing and reviewing design documents, reviewing architecture and design aspects, and ensuring adherence to best practices. To excel in this position, you should hold a bachelor's or master's degree in Computer Science, Information Technology, or a related field. Additionally, relevant certifications in AI, cloud technologies, or related areas would be advantageous. Your ability to innovate, design, and implement cutting-edge solutions will be crucial in this role, as will your skill in technology integration and problem resolution through systematic analysis. Conducting POCs to validate suggested designs and technologies will also be part of your responsibilities.
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Pune, Gurugram
Hybrid
Role: Data Engineer Experience: 5+ years Location: Pune, Gurgaon & Bangalore (Hybrid) Shift Time: 12:00 PM - 10:00 PM Must have: Experience working in AWS, Redshift, and Python. Prior exposure to data ingestion and curation work (such as working with a data lakehouse). Knowledge of SQL for the purpose of data analysis/investigation. Ability to help and support the Data Product Owner to manage and deliver on the product technical roadmap. Ability to digest and understand what the data is, how it is derived, the meaning/context around the data itself, and how the data fits into NFL's data model. Working knowledge of Confluence and JIRA. Good to have: Master's degree in computer science, statistics, or a related discipline. 5+ years as a data/business analyst or business intelligence developer/analytics engineer. Proficiency and/or certification in cloud data technologies. Hands-on experience with API integration and OneTrust. Comfortable making decisions and leading. Familiar with version control and relational databases. Superior communication skills, both oral and written. Positive contributor, strong team member, loves to work with and empower others. Collaborates with a team. Time management skills. Project management skills. Responsibilities: Develop data pipelines using Python and SQL on the AWS platform. Document and capture the use cases and business logic/rules for the assigned data domains, working with Data Analysts and Data Product Owners across domains to ensure alignment across the entire data platform. Gather and capture the technical specifications for the incorporation of a variety of data sources into the model, working with internal and external partners and vendors to understand and capture the integration method and pattern. Ensure specifications cover the various aspects of how to integrate the data, including any transformations/logic required for data validation, standardization, curation, and reporting to fulfil the relevant use cases. Work with internal stakeholders to define and capture any fulfilment requirements, such as outbound data deliveries, reporting, and metrics. Provide support during UAT and release management tasks, such as smoke testing against requirements. Prioritize to manage ad-hoc requests in parallel with ongoing sprints. Participate with the team to execute sound solutions and approaches to meet business expectations in an efficient manner. Work with Data Engineers, Data Architects, Data Governance, and QA to create and review the pipelines, data ingestion, storage, wrangling, cataloguing, quality, and curation of various data sources. Work with the Data Product Owners to help manage and deliver on the product technical roadmap for the data platform. Experienced with Scrum and Agile methodologies to coordinate global delivery teams, run scrum ceremonies, manage backlog items, and handle escalations. Education: BE/B.Tech/MS/M.Tech/ME from a reputed institute. Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow!
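A hedged sketch of a single ingestion-and-curation step like those described in the responsibilities: pull a raw file from S3 with pandas, standardize and deduplicate it per a written spec, and persist curated Parquet. Buckets, keys, and columns are hypothetical.

```python
# Hedged sketch of one ingestion-and-curation step: read a raw JSON-lines file
# from S3, standardize and deduplicate it, and persist curated Parquet.
# Buckets, keys, and columns are invented; pyarrow is assumed for Parquet.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Ingest: read a raw landed file
obj = s3.get_object(Bucket="raw-zone", Key="fans/2025/07/01/fans.json")
df = pd.read_json(io.BytesIO(obj["Body"].read()), lines=True)

# Curate: standardize, validate, and deduplicate per the business rules
df["email"] = df["email"].str.strip().str.lower()
df = df.dropna(subset=["fan_id", "email"]).drop_duplicates(subset=["fan_id"])

# Persist curated output as Parquet for the lakehouse
buf = io.BytesIO()
df.to_parquet(buf, index=False)
s3.put_object(Bucket="curated-zone", Key="fans/fans.parquet", Body=buf.getvalue())
```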
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: BI Developer and Data Analyst
Skills: Power BI, Databricks, SQL, Python, ETL, Redshift or Athena, AWS services (beyond QuickSight)
Experience: 4+ years
Responsibilities
Design, develop, and maintain interactive and insightful dashboards using QuickSight.
Conduct advanced data analysis to identify trends, patterns, and anomalies, providing meaningful interpretations and recommendations.
Collaborate with stakeholders across different departments to understand their data needs and translate them into effective analytical solutions.
Write and optimize SQL queries to extract, transform, and load data from various data sources (a sketch of a typical extraction query follows this posting).
Utilize Python for data manipulation, automation of tasks, and statistical analysis.
Ensure data accuracy, integrity, and consistency across all dashboards and analyses.
Document dashboard specifications, data sources, and analytical methodologies.
Stay up to date with the latest trends and best practices in data visualization and analytics.
Qualifications
Bachelor's degree in a quantitative field such as Data Science, Statistics, Mathematics, Computer Science, or a related discipline.
Required Skills
Data visualization best practices: proven experience developing advanced dashboards and performing data analysis; ability to create clear, intuitive, and impactful visualizations (charts, graphs, tables, KPIs) that effectively communicate insights.
Extensive experience with AWS QuickSight (or a similar BI tool): hands-on experience building, publishing, and maintaining interactive dashboards and reports.
QuickSight data sources: experience connecting QuickSight to various data sources, especially those common in AWS environments (e.g., S3, Redshift, Athena, RDS, Glue).
QuickSight dataset creation and management: proficiency in creating, transforming, and optimizing datasets within QuickSight, including calculated fields, parameters, and filters.
Performance optimization: knowledge of how to optimize QuickSight dashboards and data for speed and scalability.
Preferred Skills
Experience with other data visualization tools.
Familiarity with machine learning concepts.
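Extraction queries like those described above are often run against Athena, with the results backing a QuickSight dataset. A minimal, hypothetical sketch using boto3's Athena client is below; the database, table, region, and output bucket are illustrative assumptions:

```python
# Hypothetical Athena query kick-off; all names are illustrative.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # assumed region

# Aggregate daily orders; database and table names are assumptions.
resp = athena.start_query_execution(
    QueryString="""
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM sales.orders
        GROUP BY order_date
    """,
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Poll until the query finishes; the result file in S3 can then back a
# QuickSight dataset or be read directly for ad-hoc analysis.
qid = resp["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)
print(qid, state)
```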
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions.
Delivering on critical business priorities while ensuring alignment with the wider architectural vision.
Identifying and helping address potential risks in the data supply chain.
Following and contributing to technical standards.
Designing and developing analytical data models.
Required Qualifications & Work Experience
First Class degree in Engineering/Technology (4-year graduate course).
3 to 4 years' experience implementing data-intensive solutions using agile methodologies.
Experience with relational databases and using SQL for data querying, transformation, and manipulation.
Experience modelling data for analytical consumers.
Ability to automate and streamline the build, test, and deployment of data pipelines.
Experience with cloud-native technologies and patterns.
A passion for learning new technologies and a desire for personal growth, through self-study, formal classes, or on-the-job training.
Excellent communication and problem-solving skills.
Technical Skills (Must Have)
ETL: hands-on experience building data pipelines; proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend, or Informatica (a PySpark sketch of a typical pipeline step follows this posting).
Big Data: exposure to big data platforms such as Hadoop, Hive, or Snowflake for data storage and processing.
Data Warehousing & Database Management: understanding of data warehousing concepts and of relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
Data Modeling & Design: good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures.
Languages: proficiency in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala.
DevOps: exposure to concepts and enablers - CI/CD platforms, version control, and automated quality-control management.
Technical Skills (Valuable)
Ab Initio: experience developing Co>Op graphs and the ability to tune them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, and Continuous>Flows.
Cloud: good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc., with a demonstrable understanding of the underlying architectures and trade-offs.
Data Quality & Controls: exposure to data validation, cleansing, enrichment, and data controls.
Containerization: fair understanding of containerization platforms like Docker and Kubernetes.
File Formats: exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta.
Others: basics of job schedulers like Autosys and of entitlement management.
Certification on any of the above topics would be an advantage.
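The ETL skill named above is often demonstrated with a small Spark batch job that validates, enriches, and partitions data for analytical consumers. A minimal, hypothetical PySpark sketch is below; the S3 paths and column names are illustrative assumptions:

```python
# Hypothetical batch transform: validate, enrich, and write partitioned output.
# All paths and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw trade records (assumed Parquet landing zone).
raw = spark.read.parquet("s3://example-landing/trades/")

cleaned = (
    raw
    .filter(F.col("trade_id").isNotNull())             # basic data-quality gate
    .withColumn("trade_date", F.to_date("trade_ts"))   # derive partition column
    .withColumn("notional_usd",                        # simple enrichment
                F.col("quantity") * F.col("price"))
)

# Write curated, partitioned output for analytical consumers.
(cleaned.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3://example-curated/trades/"))

spark.stop()
```

Partitioning by a date column, as here, is a common design choice because downstream analytical queries typically filter by date and can then prune partitions instead of scanning the full dataset.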
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago