5.0 - 8.0 years
5 - 15 Lacs
Pune
Work from Office
Role & responsibilities:
- Design and implement scalable ELT pipelines using DBT and Snowflake
- Develop and optimize complex SQL queries and transformations
- Work with data loading/integration tools like StreamSets
- Collaborate with stakeholders to gather business requirements and translate them into technical solutions
- Version control and CI/CD using Git
- Schedule and monitor workflows using Apache Airflow (preferred)
- Leverage Python for custom data manipulation, scripting, and automation
- Ensure data quality, integrity, and availability across various business use cases

Preferred candidate profile:
- Strong expertise in DBT (Data Build Tool)
- Hands-on experience with Snowflake and ELT processing
- Proficiency in SQL

Good-to-have skills:
- Experience with StreamSets or other data ingestion tools
- Working knowledge of Airflow for orchestration
- Familiarity with Python for data engineering tasks
- Strong understanding of Git and version control practices
- Exposure to Agile methodologies
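For a concrete feel of the stack this posting names, here is a minimal, illustrative Airflow DAG that runs dbt models against Snowflake and then their tests; the project directory and schedule are placeholder assumptions, not details from the listing.

```python
# A hedged sketch of dbt-on-Snowflake ELT orchestrated with Airflow.
# The project path and schedule below are invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # run the ELT once a day (Airflow 2.4+ style)
    catchup=False,
) as dag:
    # dbt compiles SQL models and runs them inside Snowflake (ELT, not ETL)
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    # dbt test enforces the data-quality checks the posting calls for
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    run_models >> test_models
```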
Posted 16 hours ago
8.0 - 13.0 years
15 - 25 Lacs
Hyderabad, Pune
Hybrid
Role & responsibilities

Job Description - Snowflake Senior Developer
Experience: 8+ years | Location: India, Hybrid | Employment Type: Full-time

Job Summary: We are seeking a skilled Snowflake Developer with 8+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities:
1. Snowflake Development & Optimization: Design and develop Snowflake databases, schemas, tables, and views following best practices. Write complex SQL queries, stored procedures, and UDFs for data transformation. Optimize query performance using clustering, partitioning, and materialized views. Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks).
2. Data Pipeline Development: Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark. Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe). Develop CDC (Change Data Capture) and real-time data processing solutions.
3. Data Modeling & Warehousing: Design star schema, snowflake schema, and data vault models in Snowflake. Implement data sharing, secure views, and dynamic data masking. Ensure data quality, consistency, and governance across Snowflake environments.
4. Performance Tuning & Troubleshooting: Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage). Troubleshoot data pipeline failures, latency issues, and query bottlenecks. Work with DevOps teams to automate deployments and CI/CD pipelines.
5. Collaboration & Documentation: Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions. Document data flows, architecture, and technical specifications. Mentor junior developers on Snowflake best practices.

Required Skills & Qualifications:
- 8+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
- Certifications: SnowPro Core Certification (preferred).

Preferred Skills:
- Familiarity with data governance and metadata management.
- Familiarity with DBT, Airflow, SSIS, and IICS.
- Knowledge of CI/CD pipelines (Azure DevOps).

If interested, kindly share your updated CV at Himanshu.mehra@thehrsolutions.in
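As an illustration of the Snowflake features named above (Time Travel, Streams & Tasks), the following sketch uses the snowflake-connector-python package; the connection settings and object names are invented placeholders, not the employer's environment.

```python
# Illustrative only: a Time Travel query plus a Streams & Tasks CDC setup,
# run through snowflake-connector-python. All names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Time Travel: read the table as it existed one hour ago
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print(cur.fetchone())

# A stream captures row-level changes (CDC) on the source table...
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE orders")

# ...and a task periodically drains those changes downstream
cur.execute("""
    CREATE TASK IF NOT EXISTS merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE  = '5 MINUTE'
    AS
      INSERT INTO orders_history SELECT * FROM orders_stream
""")
cur.execute("ALTER TASK merge_orders RESUME")  # tasks start suspended
```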
Posted 17 hours ago
0.0 - 1.0 years
10 - 13 Lacs
Bengaluru
Work from Office
Job Area: Interns Group, Interns Group > Interim Intern

Qualcomm Overview: Qualcomm is a company of inventors that unlocked 5G, ushering in an age of rapid acceleration in connectivity and new possibilities that will transform industries, create jobs, and enrich lives. But this is just the beginning. It takes inventive minds with diverse skills, backgrounds, and cultures to transform 5G's potential into world-changing technologies and products. This is the Invention Age, and this is where you come in.

General Summary: Only B.Tech 2026 grads. As an IT intern, you will work with a team of IT professionals and engineers to develop, implement, and maintain various technologies for the organization. With a degree in computer science, engineering, or information technology, you will be able to contribute to some of the projects below. Below are examples of roles and technologies that you may work on during your internship:
- Framework rollout and tool implementation
- System-level integration issues
- Design and integration of new features
- Project and program documentation
- Data analysis
- Network security
- Vendor management
- Development, testing, application, database, and infrastructure maintenance and support
- Project management
- Server/system administration

Technologies:
- OS: Android, Linux, Windows, Chrome, native platforms (RIM)
- Microsoft Office suite: SharePoint, Office 365, MSFT Office, Project, etc.
- Packaged/Cloud (SaaS): Salesforce, ServiceNow, Workday
- Enterprise service management tools
- Cloud computing services, such as AWS and Azure
- Version control and operational programs, such as Git/GitHub, Splunk, Perforce, or Syslog
- High-performance compute, virtualization, firewalls, VPN technologies, storage, monitoring tools, and proxy services
- Frameworks: Hadoop, Ruby on Rails, Grails, Angular, React
- Programming languages: Java, Python, JavaScript, Objective-C, Go, Scala, .NET
- Databases: Oracle, MySQL, PostgreSQL, MongoDB, Elasticsearch, MapR-DB
- Analytics: ETL (Informatica/Spark/Airflow), visualization (Tableau/Power BI), custom applications (JavaScript)
- DevOps: containers (K8s/Docker), Jenkins, Ansible, Chef, Azure DevOps

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail myhr.support@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities.

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all staffing and recruiting agencies: please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
Posted 18 hours ago
6.0 - 11.0 years
18 - 33 Lacs
Noida, Pune, Delhi / NCR
Hybrid
Iris Software has been a trusted software engineering partner to several Fortune 500 companies for over three decades. We help clients realize the full potential of technology-enabled transformation by bringing together a unique blend of domain knowledge, best-of-breed technologies, and experience executing essential and critical application development engagements.

Title: Sr. Data Engineer / Lead Data Engineer
Experience: 5-12 years
Location: Delhi/NCR, Pune
Shift: 12:30-9:30 pm IST

Requirements:
- 6+ years of experience in data engineering with a strong focus on AWS services.
- Proven expertise in Amazon S3 for scalable data storage; AWS Glue for ETL and serverless data integration using Amazon S3, DataSync, and EMR; and Redshift for data warehousing and analytics.
- Proficiency in SQL, Python, or PySpark for data processing.
- Experience with data modeling, partitioning strategies, and performance optimization.
- Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows.

If interested, kindly share your resume at kanika.singh@irissoftware.com
Note: notice period of at most 1 month.
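To make the Glue side of this role concrete, here is a small boto3 sketch that starts a Glue ETL job and polls its state; the job name and region are illustrative assumptions, not details from the listing.

```python
# Hedged sketch: trigger an AWS Glue job run and wait for a terminal state.
# "s3-to-redshift-etl" and the region are made-up placeholders.
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(JobName="s3-to-redshift-etl")
run_id = run["JobRunId"]

# Poll the run; Glue reports states such as RUNNING, SUCCEEDED, FAILED
while True:
    state = glue.get_job_run(JobName="s3-to-redshift-etl", RunId=run_id)
    status = state["JobRun"]["JobRunState"]
    if status in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Glue job finished with state: {status}")
        break
    time.sleep(30)
```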
Posted 18 hours ago
3.0 - 6.0 years
11 - 20 Lacs
Bengaluru
Work from Office
Role & responsibilities: We are seeking a skilled Data Engineer to maintain robust data infrastructure and pipelines that support our operational analytics and business intelligence needs. The candidate will bridge the gap between data engineering and operations, ensuring reliable, scalable, and efficient data systems that enable data-driven decision making across the organization.
- Strong proficiency in Spark SQL; hands-on experience with real-time streaming using Kafka and Flink
- Databases: strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems
- Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
- Strong operations management and stakeholder communication skills
- Flexibility to work across time zones, with a cross-cultural communication mindset
- Experience working in cross-functional teams
- Continuous learning mindset and adaptability to new technologies

Preferred candidate profile:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- 3+ years of experience in data engineering, software engineering, or a related role
- Proven experience building and maintaining production data pipelines
- Expertise in the Hadoop ecosystem: Spark SQL, Iceberg, Hive, etc.
- Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies
- Orchestration tools: Apache Airflow and UC4; proficiency in Python, Unix, or similar
- Good understanding of SQL, Oracle, SQL Server, NoSQL, or similar
- Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
- Immediate joiners, or candidates with a notice period under 30 days, preferred
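A minimal sketch of the Kafka-plus-Spark streaming work this posting describes; the broker address and topic are invented, and the Kafka connector package (spark-sql-kafka) is assumed to be on the Spark classpath.

```python
# Illustrative Spark Structured Streaming job reading from Kafka.
# Broker, topic, and sink are placeholders; real pipelines would parse
# JSON and write to Iceberg/Hive tables as the posting suggests.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("ops-analytics-stream").getOrCreate()

# Subscribe to a Kafka topic; each record arrives as key/value bytes
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

# Minimal transformation before sinking to the console for demonstration
query = (
    events.filter(col("payload").isNotNull())
    .writeStream.format("console")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```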
Posted 18 hours ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Mandatory requirements: Golang, Python, Airflow, Temporal

Key Responsibilities:
- Design, develop, and maintain scalable backend services and workflow orchestration components using Python and GoLang.
- Collaborate with the Airflow and Temporal team to build and optimize data pipelines and asynchronous job execution frameworks.
- Implement and manage complex workflow logic using Apache Airflow and Temporal.
- Ensure high code quality through unit testing, integration testing, and code reviews.
- Work closely with cross-functional teams, including Data Engineering, DevOps, and Platform Engineering.
- Contribute to architectural discussions and decision-making processes to ensure scalable and maintainable systems.
- Write clear documentation and participate in knowledge-sharing sessions.

Required Skills and Experience:
- 5-7 years of professional software engineering experience.
- Strong hands-on programming experience with Python and GoLang.
- Solid understanding of concurrent and distributed systems.
- Hands-on experience with Apache Airflow and/or Temporal.io (see the sketch after this list).
- Experience in designing and developing robust APIs and backend services.
- Familiarity with containerization tools (e.g., Docker) and CI/CD practices.
- Good understanding of the software development lifecycle (SDLC) and Agile methodologies.
- Excellent problem-solving, communication, and collaboration skills.

Nice to have:
- Experience with cloud platforms (e.g., AWS, GCP, or Azure).
- Exposure to microservices architecture and event-driven systems.
- Familiarity with monitoring and observability tools.
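The sketch referenced above: a minimal Temporal example using the temporalio Python SDK, showing the workflow/activity split this role works with. The task queue, workflow, and activity names are placeholders, not the hiring team's code.

```python
# Hedged Temporal sketch (temporalio SDK): deterministic workflow code
# delegates side effects to a retryable activity. All names are invented.
import asyncio
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.client import Client
from temporalio.worker import Worker


@activity.defn
async def load_partition(partition: str) -> str:
    # A real activity would move/transform data; activities may be retried
    return f"loaded {partition}"


@workflow.defn
class BackfillWorkflow:
    @workflow.run
    async def run(self, partition: str) -> str:
        # Workflow code must be deterministic; I/O lives in activities
        return await workflow.execute_activity(
            load_partition,
            partition,
            start_to_close_timeout=timedelta(minutes=10),
        )


async def main() -> None:
    client = await Client.connect("localhost:7233")
    async with Worker(
        client,
        task_queue="data-pipelines",
        workflows=[BackfillWorkflow],
        activities=[load_partition],
    ):
        result = await client.execute_workflow(
            BackfillWorkflow.run,
            "2024-01-01",
            id="backfill-2024-01-01",
            task_queue="data-pipelines",
        )
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```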
Posted 18 hours ago
3.0 - 8.0 years
4 - 8 Lacs
Pune
Work from Office
Required Skills and Competencies:
- Experience: 3+ years.
- Expertise in Python is a must.
- SQL (ability to write complex SQL queries) is a must.
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a must.
- Hands-on expertise with Apache Kafka is a must.
- Data lake development experience.
- Orchestration (Apache Airflow is preferred).
- Spark and Hive: optimization of Spark/PySpark and Hive applications.
- Trino / AWS Athena (good to have).
- Snowflake (good to have).
- Data quality (good to have).
- File storage (S3 is good to have).
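By way of illustration of the Spark/Hive optimization point above, a short PySpark sketch showing partition pruning, a broadcast join, and deliberate caching; the table and column names are assumptions made for the example.

```python
# Illustrative PySpark tuning: prune Hive partitions early, broadcast the
# small dimension table, and cache only reused results. Names are made up.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col

spark = (
    SparkSession.builder.appName("spark-hive-tuning")
    .enableHiveSupport()
    .getOrCreate()
)

# Filter on the Hive partition column first so Spark prunes partitions
events = spark.table("dw.events").filter(col("dt") == "2024-01-01")

# Broadcast the small dimension table to avoid a shuffle-heavy join
dims = spark.table("dw.dim_sku")
joined = events.join(broadcast(dims), "sku_id")

# Cache only when the result feeds multiple downstream actions
joined.cache()
print(joined.count())
```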
Posted 19 hours ago
3.0 - 6.0 years
3 - 6 Lacs
Haryāna
On-site
Job Overview: We are seeking a skilled and detail-oriented HVAC Engineer with experience in cleanroom HVAC systems, including ducting, mechanical piping, and sheet metal works. The ideal candidate will assist in site execution, technical coordination, and quality assurance in line with cleanroom standards for pharmaceutical, biotech, or industrial facilities.

Key Responsibilities:
- Support end-to-end HVAC system execution, including ducting, AHU installation, chilled water piping, and insulation.
- Supervise and coordinate day-to-day HVAC activities at the site in line with approved drawings and technical specifications.
- Review and interpret HVAC layouts, shop drawings, and coordination drawings for proper implementation.
- Ensure HVAC materials (ducts, dampers, diffusers, filters, etc.) meet project specifications and site requirements.
- Coordinate with other services (plumbing, electrical, BMS, fire-fighting) to ensure conflict-free execution.
- Monitor subcontractor work and labor force for compliance with timelines, quality, and safety standards.
- Assist in air balancing, testing, and commissioning activities, including HEPA filter installation and pressure validation.
- Conduct site surveys and measurements, and prepare daily/weekly progress reports.
- Maintain records for material movement, consumption, and inspection checklists.
- Work closely with the design and planning team to address technical issues and implement design revisions.
- Ensure cleanroom HVAC work complies with ISO 14644, GMP guidelines, and other regulatory standards.

Required Skills & Qualifications:
- Diploma / B.Tech / B.E. in Mechanical Engineering or equivalent.
- 3–6 years of site execution experience in HVAC works, preferably in cleanroom or pharma/industrial MEP projects.
- Sound knowledge of duct fabrication, SMACNA standards, GI/SS materials, and cleanroom duct installation techniques.
- Hands-on experience with HVAC drawings, site measurement, and installation planning.
- Familiarity with testing procedures such as DOP/PAO testing, air balancing, and filter integrity testing.
- Proficiency in AutoCAD, MS Excel, and basic computer applications.
- Good communication skills, site discipline, and teamwork.

Desirable Attributes:
- Knowledge of cleanroom classifications and airflow management.
- Ability to manage vendors, material tracking, and basic troubleshooting.
- Familiarity with safety practices and quality control procedures on site.

Job Type: Full-time
Pay: ₹30,000.00 - ₹50,000.00 per month
Benefits: Health insurance, life insurance, Provident Fund
Schedule: Day shift
Supplemental Pay: Overtime pay
Ability to commute/relocate: Haryana: Reliably commute or planning to relocate before starting work (Preferred)
Language: English (Preferred)
Work Location: In person
Posted 21 hours ago
7.0 years
12 Lacs
India
On-site
Experience: 7+ years
Location: Hyderabad (preferred), Pune, Mumbai

JD: We are seeking a skilled Snowflake Developer with 7+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities:
1. Snowflake Development & Optimization: Design and develop Snowflake databases, schemas, tables, and views following best practices. Write complex SQL queries, stored procedures, and UDFs for data transformation. Optimize query performance using clustering, partitioning, and materialized views. Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks).
2. Data Pipeline Development: Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark. Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe). Develop CDC (Change Data Capture) and real-time data processing solutions.
3. Data Modeling & Warehousing: Design star schema, snowflake schema, and data vault models in Snowflake. Implement data sharing, secure views, and dynamic data masking. Ensure data quality, consistency, and governance across Snowflake environments.
4. Performance Tuning & Troubleshooting: Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage). Troubleshoot data pipeline failures, latency issues, and query bottlenecks. Work with DevOps teams to automate deployments and CI/CD pipelines.
5. Collaboration & Documentation: Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions. Document data flows, architecture, and technical specifications. Mentor junior developers on Snowflake best practices.

Required Skills & Qualifications:
- 8+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
- Certifications: SnowPro Core Certification (preferred).

Preferred Skills:
- Familiarity with data governance and metadata management.
- Familiarity with DBT, Airflow, SSIS & IICS.
- Knowledge of CI/CD pipelines (Azure DevOps).

Job Type: Full-time
Pay: From ₹1,200,000.00 per year
Schedule: Monday to Friday

Application Questions:
- How many years of total experience do you currently have?
- How many years of experience do you have in Snowflake development?
- How many years of experience do you have with DBT?
- What is your current CTC?
- What is your expected CTC?
- What is your notice period/LWD?
- What is your current location?
- Are you comfortable attending the 1st round face to face on 2nd Aug (Saturday) in the Hyderabad, Mumbai, or Pune office?
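Since this JD lists Snowpark among the Snowflake utilities, here is a hedged Snowpark-for-Python sketch; connection parameters and table names are placeholders invented for the example.

```python
# Illustrative Snowpark usage: DataFrame operations are pushed down and
# executed inside Snowflake. Connection settings are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "my_account", "user": "dev_user", "password": "...",
    "warehouse": "DEV_WH", "database": "ANALYTICS", "schema": "PUBLIC",
}).create()

# Aggregate shipped-order revenue by region, entirely server-side
revenue = (
    session.table("orders")
    .filter(col("status") == "SHIPPED")
    .group_by("region")
    .agg(sum_("amount").alias("total_amount"))
)
revenue.show()

# Persist the aggregate as a table for BI consumption
revenue.write.save_as_table("region_revenue", mode="overwrite")
```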
Posted 21 hours ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Science Engineer

What you will do: Let’s do this. Let’s change the world. In this vital role, we are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines.

Roles & Responsibilities:
- Collaborate with data scientists to develop, train, and evaluate machine learning models.
- Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring.
- Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment.
- Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency.
- Develop and implement monitoring systems to track model performance and identify issues.
- Conduct A/B testing and experimentation to optimize model performance.
- Work closely with data scientists, engineers, and product teams to deliver ML solutions.
- Stay updated with the latest trends and advancements.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications: Master's degree / Bachelor's degree and 5 to 9 years [Job Code’s Discipline and/or Sub-Discipline]

Functional Skills:
Must-Have Skills:
- Solid foundation in machine learning algorithms and techniques
- Experience in MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow); experience in DevOps tools (e.g., Docker, Kubernetes, CI/CD)
- Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn)
- Outstanding analytical and problem-solving skills; ability to learn quickly; good communication and interpersonal skills

Good-to-Have Skills:
- Experience with big data technologies (e.g., Spark, Hadoop) and performance tuning in query and data processing
- Experience with data engineering and pipeline development
- Experience in statistical techniques and hypothesis testing; experience with regression analysis, clustering, and classification
- Knowledge of NLP techniques for text analysis and sentiment analysis
- Experience in analyzing time-series data for forecasting and trend analysis

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
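As a flavor of the MLOps tooling this role names (MLflow among them), a small illustrative experiment-tracking example with a scikit-learn model; the experiment, data, and parameters are invented, not Amgen's pipeline.

```python
# Hedged MLflow sketch: train a toy classifier, log params/metrics, and
# log the fitted model so runs are comparable and deployable.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # placeholder experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Track what was run and how well it did
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```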
Posted 21 hours ago
2.0 - 6.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Associate Data Engineer

What you will do: Let’s do this. Let’s change the world. In this vital role we seek a skilled Data Engineer to build and optimize our data infrastructure. As a key contributor, you will collaborate closely with cross-functional teams to design and implement robust data pipelines that efficiently extract, transform, and load data into our AWS-based data lake and data warehouse. Your expertise will be instrumental in empowering data-driven decision making through advanced analytics and predictive modeling.

Roles & Responsibilities:
- Building and optimizing data pipelines, data warehouses, and data lakes on the AWS and Databricks platforms.
- Managing and maintaining the AWS and Databricks environments.
- Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Maintaining system uptime and optimal performance.
- Working closely with cross-functional teams to understand business requirements and translate them into technical solutions.
- Exploring and implementing new tools and technologies to enhance ETL platform performance.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications: Bachelor’s degree and 2 to 6 years.

Functional Skills:
Must-Have Skills:
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and columnar data stores. Proven ability to optimize query performance on big data platforms.
- Proficient in leveraging Python, PySpark, and Airflow to build scalable and efficient data ingestion, transformation, and loading processes.
- Ability to learn new technologies quickly.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.

Good-to-Have Skills:
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with Apache Spark and Apache Airflow
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experience with AWS, GCP, or Azure cloud services

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
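To illustrate the PySpark ingestion-with-quality-checks work this posting describes, a brief sketch; the bucket paths and column names are assumptions made for the example.

```python
# Hedged sketch: ingest raw files, apply a basic quality gate, and append
# the clean rows to a lake table. All paths and columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Quality gate: reject rows missing keys rather than silently loading them
valid = raw.filter(col("order_id").isNotNull() & col("amount").isNotNull())
rejected = raw.count() - valid.count()
if rejected > 0:
    print(f"quarantined {rejected} malformed rows")

valid.write.mode("append").parquet("s3://lake-bucket/curated/orders/")
```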
Posted 21 hours ago
1.5 - 2.0 years
0 Lacs
India
On-site
Qualification:
- Education: Bachelor’s degree in any field.
- Experience: Minimum 1.5-2 years of experience in data engineering support or a related role, with hands-on exposure to AWS.

Technical Skills:
- Strong understanding of AWS services, including but not limited to S3, EC2, CloudWatch, and IAM.
- Proficiency in SQL, with the ability to write, optimize, and debug queries for data analysis and issue resolution.
- Hands-on experience with Python for scripting and automation; familiarity with shell scripting is a plus.
- Good understanding of ETL processes and data pipelines.
- Exposure to data warehousing concepts; experience with Amazon Redshift or similar platforms preferred.
- Working knowledge of orchestration tools, especially Apache Airflow, including monitoring and basic troubleshooting.

Soft Skills:
- Strong communication and interpersonal skills for effective collaboration with cross-functional and multi-cultural teams.
- Problem-solving attitude with an eagerness to learn and adapt quickly.
- Willingness to work in a 24x7 support environment on a 6-day working schedule, with rotational shifts as required.

Language Requirements: Must be able to read and write in English proficiently.
Posted 21 hours ago
80.0 years
0 Lacs
Bengaluru
On-site
Job Description

For more than 80 years, Kaplan has been a trailblazer in education and professional advancement. We are a global company at the intersection of education and technology, focused on collaboration, innovation, and creativity to deliver a best-in-class educational experience and make Kaplan a great place to work. Our offices in India opened in Bengaluru in 2018. Since then, our team has fueled growth and innovation across the organization, impacting students worldwide. We are eager to grow and expand with skilled professionals like you who use their talent to build solutions, enable effective learning, and improve students’ lives. The future of education is here, and we are eager to work alongside those who want to make a positive impact and inspire change in the world around them.

The Associate Data Engineer at Kaplan North America (KNA) within the Analytics division will work with world-class psychometricians, data scientists, and business analysts to forever change the face of education. This role is a hands-on technical expert who will help implement an Enterprise Data Warehouse powered by AWS RA3 as a key feature of our Lake House architecture. The perfect candidate possesses strong technical knowledge in data engineering, data observability, infrastructure automation, DataOps methodology, systems architecture, and development. You should be expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You should be able to work with business customers in a fast-paced environment, understanding the business requirements and implementing data and reporting solutions. Above all, you should be passionate about working with big data and someone who loves to bring datasets together to answer business questions and drive change.

Responsibilities:
- Design, implement, and deploy data solutions; solve difficult problems, generating positive feedback.
- Build different types of data warehousing layers based on specific use cases.
- Lead the design, implementation, and successful delivery of large-scale, critical, or difficult data solutions involving a significant amount of work.
- Build scalable data infrastructure and understand distributed systems concepts from a data storage and compute perspective.
- Utilize expertise in SQL and a strong understanding of ETL and data modeling.
- Ensure the accuracy and availability of data to customers, and understand how technical decisions can impact their business’s analytics and reporting.
- Be proficient in at least one scripting/programming language to handle large-volume data processing.
- 30-day notice period preferred.

Requirements:
- In-depth knowledge of the AWS stack (RA3, Redshift, Lambda, Glue, SNS).
- Experience in data modeling, ETL development, and data warehousing.
- Effective troubleshooting and problem-solving skills.
- Strong customer focus, ownership, urgency, and drive.
- Excellent verbal and written communication skills and the ability to work well in a team.

Preferred Qualifications: Proficiency with Airflow, Tableau, and SSRS.

Location: Bangalore, KA, India
Employee Type: Employee
Job Functional Area: Systems Administration/Engineering
Business Unit: 00091 Kaplan Higher ED

At Kaplan, we recognize the importance of attracting and retaining top talent to drive our success in a competitive market.
Our salary structure and compensation philosophy reflect the value we place on the experience, education, and skills that our employees bring to the organization, taking into consideration labor market trends and total rewards. All positions with Kaplan are paid at least $15 per hour or $31,200 per year for full-time positions. Additionally, certain positions are bonus or commission-eligible. And we have a comprehensive benefits package, learn more about our benefits here . Diversity & Inclusion Statement: Kaplan is committed to cultivating an inclusive workplace that values diversity, promotes equity, and integrates inclusivity into all aspects of our operations. We are an equal opportunity employer and all qualified applicants will receive consideration for employment regardless of age, race, creed, color, national origin, ancestry, marital status, sexual orientation, gender identity or expression, disability, veteran status, nationality, or sex. We believe that diversity strengthens our organization, fuels innovation, and improves our ability to serve our students, customers, and communities. Learn more about our culture here . Kaplan considers qualified applicants for employment even if applicants have an arrest or conviction in their background check records. Kaplan complies with related background check regulations, including but not limited to, the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. There are various positions where certain convictions may disqualify applicants, such as those positions requiring interaction with minors, financial records, or other sensitive and/or confidential information. Kaplan is a drug-free workplace and complies with applicable laws.
Posted 21 hours ago
0 years
4 - 9 Lacs
Bengaluru
Remote
Your opportunity: Do you love the transformative impact data can have on a business? Are you motivated to push for results and overcome all obstacles? Then we have a role for you.

What you'll do:
- Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems.
- Provide leadership to cross-functional initiatives and projects.
- Influence architecture design and decisions.
- Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs.
- Improve engineering processes and cross-team collaboration.
- Ruthlessly prioritize work to align with company priorities.
- Provide thought leadership to grow and evolve the DE function and the implementation of SDLC best practices in building internal-facing data products, by staying up to date with industry trends, emerging technologies, and best practices in data engineering.

This role requires:
- Experience in BI and data warehousing.
- Strong experience with dbt, Airflow, and Snowflake.
- Experience with Apache Iceberg tables.
- Experience and knowledge of building data lakes in AWS (i.e., Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling.
- Experience mentoring data professionals from junior to senior levels.
- Demonstrated success leading cross-functional initiatives.
- Passion for data quality, code quality, SLAs, and continuous improvement.
- Deep understanding of data system architecture.
- Deep understanding of ETL/ELT patterns.
- Development experience in at least one object-oriented language (Python, R, Scala, etc.).
- Comfort with SQL and related tooling.

Bonus points if you have: Experience with observability.

Please note that visa sponsorship is not available for this position.

Fostering a diverse, welcoming, and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day. We celebrate our talented Relics’ different backgrounds and abilities, and recognize the different paths they took to reach us, including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We’re looking for people who feel connected to our mission and values, not just candidates who check off all the boxes. If you require a reasonable accommodation to complete any part of the application or recruiting process, please reach out to resume@newrelic.com. We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote, or hybrid.

Our hiring process: In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: our stewardship of the data of thousands of customers means that a criminal background check is required to join New Relic. We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law including, but not limited to, the San Francisco Fair Chance Ordinance. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers.
New Relic does not accept unsolicited headhunter and agency resumes, and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic. Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint, or other legally protected characteristics. Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy
Posted 21 hours ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who We Are: Wayfair is moving the world so that anyone can live in a home they love – a journey enabled by more than 3,000 Wayfair engineers and a data-centric culture. Wayfair’s Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of advertiser goals while showing highly relevant and engaging ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, and we are leveraging state-of-the-art machine learning techniques.

The Advertising Optimization & Automation Science team is central to this effort. We leverage machine learning and generative AI to streamline campaign workflows, delivering impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, we are developing intelligent systems for creative optimization and exploring agentic frameworks to further simplify and enhance advertiser interactions. We are looking for an experienced Senior Machine Learning Scientist to join the Advertising Optimization & Automation Science team. In this role, you will be responsible for building intelligent, ML-powered systems that drive personalized recommendations and campaign automation within Wayfair’s advertising platform. You will work closely with other scientists, as well as members of our internal Product and Engineering teams, to apply your ML expertise to define and deliver 0-to-1 capabilities that unlock substantial commercial value and directly enhance advertiser outcomes.

What You’ll Do:
- Design and build intelligent budget, tROAS, and SKU recommendations, and simulation-driven decisioning that extends beyond the current advertising platform capabilities.
- Lead the next phase of GenAI-powered creative optimization and automation to drive significant incremental ad revenue and improve supplier outcomes.
- Raise technical standards across the team by promoting best practices in ML system design and development.
- Partner cross-functionally with Product, Engineering, and Sales to deliver scalable ML solutions that improve supplier campaign performance.
- Ensure systems are designed for reuse, extensibility, and long-term impact across multiple advertising workflows.
- Research and apply best practices in advertising science, GenAI applications in creative personalization, and auction modeling. Keep Wayfair at the forefront of innovation in supplier marketing optimization.
- Collaborate with Engineering teams (AdTech, ML Platform, Campaign Management) to build and scale the infrastructure needed for automated, intelligent advertising decisioning.

We Are a Match Because You Have:
- Bachelor's or Master’s degree in Computer Science, Mathematics, Statistics, or a related field.
- 9+ years of experience in building large-scale machine learning algorithms.
- 4+ years of experience working in an architect or technical leadership position.
- Strong theoretical understanding of statistical models such as regression and clustering, and of ML algorithms such as decision trees, neural networks, transformers, and NLP techniques.
- Proficiency in programming languages such as Python and relevant ML libraries (e.g., TensorFlow, PyTorch) to develop production-grade products.
- Strategic thinking with a customer-centric mindset and a desire for creative problem solving, looking to make a big impact in a growing organization.
- Demonstrated success influencing senior-level stakeholders on strategic direction based on recommendations backed by in-depth analysis; excellent written and verbal communication.
- Ability to partner cross-functionally to own and shape technical roadmaps.
- Intellectual curiosity and a desire to always be learning!

Nice to have:
- Experience with GCP, Airflow, and containerization (Docker).
- Experience building scalable data processing pipelines with big data tools such as Hadoop, Hive, SQL, Spark, etc.
- Familiarity with Generative AI and agentic workflows.
- Experience in Bayesian Learning, Multi-armed Bandits, or Reinforcement Learning.

About Wayfair Inc.: Wayfair is one of the world’s largest online destinations for the home. Through our commitment to industry-leading technology and creative problem-solving, we are confident that Wayfair will be home to the most rewarding work of your career. If you’re looking for rapid growth, constant learning, and dynamic challenges, then you’ll find that amazing career opportunities are knocking. No matter who you are, Wayfair is a place you can call home. We’re a community of innovators, risk-takers, and trailblazers who celebrate our differences, and know that our unique perspectives make us stronger, smarter, and well-positioned for success. We value and rely on the collective voices of our employees, customers, community, and suppliers to help guide us as we build a better Wayfair – and world – for all. Every voice, every perspective matters. That’s why we’re proud to be an equal opportunity employer. We do not discriminate on the basis of race, color, ethnicity, ancestry, religion, sex, national origin, sexual orientation, age, citizenship status, marital status, disability, gender identity, gender expression, veteran status, genetic information, or any other legally protected characteristic.

We are interested in retaining your data for a period of 12 months to consider you for suitable positions within Wayfair. Your personal data is processed in accordance with our Candidate Privacy Notice (which can be found here: https://www.wayfair.com/careers/privacy). If you have any questions regarding our processing of your personal data, please contact us at dataprotectionofficer@wayfair.com. If you would rather not have us retain your data, please contact us anytime at dataprotectionofficer@wayfair.com.
Posted 21 hours ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview: TekWissen is a global workforce management provider with operations in India and many other countries. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place, one that benefits lives, communities, and the planet.

Job Title: Software Engineer Senior
Location: Chennai
Work Type: Hybrid

Position Description: As part of the client's DP&E Platform Observability team, you'll help build a top-tier monitoring platform focused on latency, traffic, errors, and saturation. You'll design, develop, and maintain a scalable, reliable platform, improving MTTR/MTTX, creating dashboards, and optimizing costs. Experience with large systems, monitoring tools (Prometheus, Grafana, etc.), and cloud platforms (AWS, Azure, GCP) is ideal. The focus is a centralized observability source for data-driven decisions and faster incident response.

Skills Required: Spring Boot, Angular, Cloud Computing
Skills Preferred: Google Cloud Platform (BigQuery, Dataflow, Dataproc, Data Fusion), Terraform, Tekton, Cloud SQL, Airflow, PostgreSQL, PySpark, Python, APIs

Experience Required: 5+ years of overall experience, with proficiency in Java and Angular or another JavaScript technology, and experience designing and deploying cloud-based data pipelines and microservices using GCP tools like BigQuery, Dataflow, and Dataproc. Ability to leverage best-in-class data platform technologies (Apache Beam, Kafka, etc.) to deliver platform features, and to design and orchestrate platform services to deliver data platform capabilities.
- Service-Oriented Architecture and Microservices: Strong understanding of SOA, microservices, and their application within a cloud data platform context. Develop robust, scalable services using Java Spring Boot, Python, Angular, and GCP technologies.
- Full-Stack Development: Knowledge of front-end and back-end technologies, enabling collaboration on data access and visualization layers (e.g., React, Node.js). Design and develop RESTful APIs for seamless integration across platform services. Implement robust unit and functional tests to maintain high standards of test coverage and quality.
- Database Management: Experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases, as well as columnar databases like BigQuery.
- Data Governance and Security: Understanding of data governance frameworks and implementing RBAC, encryption, and data masking in cloud environments.
- CI/CD and Automation: Familiarity with CI/CD pipelines, Infrastructure as Code (IaC) tools like Terraform, and automation frameworks. Manage code changes with GitHub, and troubleshoot and resolve application defects efficiently. Ensure adherence to SDLC best practices, independently managing feature design, coding, testing, and production releases.
- Problem-Solving: Strong analytical skills with the ability to troubleshoot complex data platform and microservices issues.

Experience Preferred: GCP Data Engineer, GCP Professional Cloud
Education Required: Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
Posted 21 hours ago
4.0 years
7 - 11 Lacs
India
On-site
Experience: 4+ years
Department: Data & Analytics

We’re seeking a hands-on Senior Data Analyst & Scientist to drive insights, build predictive models, and develop scalable data pipelines. This hybrid role blends data science and analytics engineering to support decision-making across teams.

Responsibilities:
- Analyze data to identify trends and business insights
- Build and deploy ML models (forecasting, segmentation, recommendations)
- Design/manage ETL/ELT pipelines
- Develop dashboards and reports (Tableau, Power BI, etc.)
- Collaborate with product, engineering, and business teams

Requirements:
- 4+ years of experience in data science, analytics, or data engineering
- Strong in SQL and Python (pandas, NumPy); experience with ML tools like scikit-learn
- Familiar with Airflow, DBT, or similar orchestration tools
- Experience with cloud platforms like BigQuery, Snowflake, or Redshift
- Proficient in BI tools such as Tableau, Power BI, or Looker
- Excellent communication and problem-solving skills

Nice to Have:
- ML model deployment experience
- Cloud experience (AWS, GCP, or Azure)
- Background in SaaS or product analytics

Job Type: Full-time
Pay: ₹60,000.00 - ₹95,000.00 per month
Benefits: Flexible schedule, paid sick time, paid time off
Language: English (Required)
Work Location: In person
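A toy example of the segmentation work listed in the responsibilities, using scikit-learn's KMeans on synthetic customer features; everything here is illustrative rather than a prescribed approach.

```python
# Hedged segmentation sketch: cluster synthetic customers into segments.
# Feature names and the choice of four clusters are arbitrary assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
customers = pd.DataFrame({
    "orders_per_month": rng.poisson(3, 500),
    "avg_order_value": rng.gamma(2.0, 40.0, 500),
    "tenure_months": rng.integers(1, 60, 500),
})

# Scale features so no single column dominates the distance metric
X = StandardScaler().fit_transform(customers)

# Four segments is a starting point; validate with silhouette scores etc.
customers["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(customers.groupby("segment").mean().round(1))
```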
Posted 21 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

Responsibilities:
- Infrastructure Provisioning and Management: Set up and configure cloud projects, data integration tools, and accounts using Google Cloud Platform (GCP).
- Configuring Cloud and Data Integration Solutions: Configure data integration tools such as Astronomer, GKE, Application Integration, Airflow, and Composer.
- Data Pipeline Orchestration: Set up and configure data pipeline orchestration tools such as Astronomer, Composer, and Airflow. Manage and schedule workflows using Directed Acyclic Graphs (DAGs) to ensure efficient data processing.
- Monitoring and Incident Response: Maintain the availability of systems and services in production by setting service-level objectives (SLOs), service-level agreements (SLAs), and service-level indicators (SLIs). Set up monitoring and alerting systems for cloud services, data pipelines, and AI solutions. Respond to incidents, perform root cause analysis, and collaborate with teams to improve incident response practices.
- Security and Compliance: Ensure access and identity management in Google Cloud and data pipeline projects. Implement security measures to protect cloud-based data, services, and AI/ML workloads. Comply with global privacy laws and organizational policies.
- GenAI Solution Development & Delivery: Lead the development, testing, and deployment of GenAI solutions.
- Collaboration and Communication: Collaborate with cross-functional teams to design and implement innovative security solutions that align with industry best practices and regulatory requirements. Communicate effectively with management and users about the state and performance of cloud services, data pipelines, and AI solutions.
- Continuous Improvement: Automate infrastructure provisioning with tools like Terraform and Tekton. Enhance visibility and monitoring capabilities for cloud services, data pipelines, and GenAI solutions. Implement improvements based on feedback and incident reviews.
- On-Call Support: Able to support on-call duty during weekends.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: Proven experience in cloud engineering, data pipeline orchestration, and AI solution development, specifically with Google Cloud Platform (GCP) and tools like Astronomer, Composer, and Airflow.
- Tools and languages: Python, Terraform, Google Cloud, Tekton, GenAI, and LLM models.
- Skills: Proficiency in GCP services and data pipeline orchestration tools. Strong understanding of IAM roles, policies, and security best practices. Experience with infrastructure-as-code tools like Terraform. Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills.
Posted 21 hours ago
6.0 years
14 - 24 Lacs
India
On-site
Requirements:
- 6+ years of experience as a Data Engineer.
- Strong proficiency in SQL.
- Hands-on experience with modern cloud data warehousing solutions (Snowflake, BigQuery, Redshift).
- Expertise in ETL/ELT processes and batch and streaming data processing.
- Proven ability to troubleshoot data issues and propose effective solutions.
- Knowledge of AWS services (S3, DMS, Glue, Athena).
- Familiarity with DBT for data transformation and modeling.
- Must be fluent in English communication.

Desired Experience:
- 3 years of experience with additional AWS services (EC2, ECS, EKS, VPC, IAM).
- Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt.
- Proficiency in Python for data engineering tasks.
- Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions.
- Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS).
- Experience with CI/CD pipelines and automation for data processes.
Posted 21 hours ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hi all, we are hiring Data Engineers; kindly refer to the skill sets below.

Mandatory skills:
- GCP Cloud (especially BigQuery and Dataproc)
- Big Data technologies: Hadoop, Hive
- Python / PySpark
- Airflow and DAG orchestration

Preferred skills:
- Experience with visualization tools such as Tableau or Power BI
- Familiarity with Jethro is a plus
Posted 21 hours ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior Data Analyst - Project Management
Location: Bengaluru, Karnataka, India
Experience: 2-3 Years
About the Company & Role:
We are one of India’s premier integrated political consulting firms specializing in building data-driven 360-degree election campaigns. We help our clients with strategic advice and implementation, bringing together data-backed insights and in-depth ground intelligence into a holistic electoral campaign. We are passionate about our democracy and the politics that shape the world around us. We draw on some of the sharpest minds from distinguished institutions and diverse professional backgrounds to help us achieve our goal. The team brings 7 years of experience in building electoral strategies that spark conversations, effect change, and help shape electoral and legislative ecosystems in our country.
Job Summary:
We are seeking a highly motivated and skilled Data Analyst to join our dynamic Project Management Office (PMO). This critical role involves developing, maintaining, and enhancing insightful PMO dashboards while also designing, implementing, and managing automated data pipelines. The ideal candidate will possess a strong blend of data analysis, visualization, and technical automation skills to ensure the PMO has timely, accurate data for tracking project performance, identifying trends, and making data-driven decisions.
Key Responsibilities:
PMO Dashboard Development & Management:
Design, build, and maintain interactive dashboards using BI tools (e.g., Looker Studio, Tableau) to visualize key project metrics, resource allocation, timelines, risks, and overall PMO performance KPIs.
Collaborate with PMO leadership and project managers to gather reporting requirements and translate them into effective data models and visualizations.
Ensure data accuracy, consistency, and reliability within dashboards and reports.
Perform data analysis to identify trends, potential issues, and areas for process improvement within project execution.
Generate regular performance reports and support ad-hoc data requests from stakeholders.
Data Management:
Design, develop, implement, and maintain robust, automated data pipelines for Extract, Transform, Load (ETL/ELT) processes (see the sketch after this posting).
Automate data collection from various sources including project management software, spreadsheets, databases, and APIs (e.g., Slack API).
Load and process data efficiently into our data warehouse environment (e.g., Google BigQuery).
Write and optimize SQL queries for data manipulation, transformation, and aggregation.
Implement data quality checks, error handling, and monitoring for automated pipelines.
Troubleshoot and resolve issues related to data extraction, transformation, loading, and pipeline failures.
Document data sources, data models, pipeline architecture, and automation workflows.
Required Qualifications & Skills:
Bachelor's degree in Computer Science, Data Science, Statistics, Information Systems, Engineering, or a related quantitative field.
Proven experience (approx. 2-3 years) in data analysis, business intelligence, data engineering, or a similar role.
Strong proficiency in SQL for complex querying, data manipulation, and performance tuning.
Hands-on experience building and maintaining dashboards using Tableau.
Demonstrable experience in designing and automating data pipelines using scripting languages (Python preferred) and/or ETL/ELT tools.
Solid understanding of data warehousing concepts, ETL/ELT principles, and data modeling.
Excellent analytical, problem-solving, and critical thinking skills.
Strong attention to detail and commitment to data accuracy.
Good communication and collaboration skills, with the ability to interact with technical and non-technical stakeholders.
Ability to work independently and manage priorities effectively.
Preferred Qualifications & Skills:
Experience working directly within a Project Management Office (PMO) or supporting project management functions.
Familiarity with project management tools (e.g., Jira, Asana, MS Project) and concepts (Agile, Waterfall).
Experience with cloud platforms, particularly Google Cloud Platform (GCP) and BigQuery.
Experience with workflow orchestration tools (e.g., Airflow, Cloud Composer, Cloud Functions).
Experience integrating data via APIs from various business systems.
Basic understanding of data governance and data quality management practices.
If you are a driven professional seeking a high-impact challenge and interested in joining a team of like-minded, motivated individuals who think strategically, act decisively, and get things done, email us at openings@varaheanalytics.com
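Since the posting above centres on automated loads into Google BigQuery, here is a minimal, illustrative sketch of one such load step in Python, assuming the pandas and google-cloud-bigquery libraries, valid GCP credentials, and a hypothetical pmo_reporting.project_status table; the file name, column names, and quality check are placeholders, not the firm's actual pipeline.

import pandas as pd
from google.cloud import bigquery

def load_project_snapshot(csv_path: str, table_id: str) -> None:
    """Read a project-tracker export and append it to a BigQuery table."""
    df = pd.read_csv(csv_path, parse_dates=["report_date"])  # hypothetical column
    # Simple data-quality gate before loading.
    if df["project_id"].isna().any():
        raise ValueError("Rows without a project_id found; aborting load.")
    client = bigquery.Client()
    job = client.load_table_from_dataframe(df, table_id)
    job.result()  # Block until the load job completes.

load_project_snapshot("status_export.csv", "my-project.pmo_reporting.project_status")

In practice a scheduler such as Airflow or Cloud Composer (both named in the posting) would trigger this function and alert on failure.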
Posted 21 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Senior Python Developer – AI/ML Document Automation
Location: Hyderabad
Work Mode: Hybrid
Experience: 5+ Years
Job Summary:
We are looking for a highly skilled Senior Python Developer with deep expertise in AI/ML and document automation. The ideal candidate will lead the design and development of intelligent systems for extracting and processing structured and unstructured data from documents such as invoices, receipts, contracts, and PDFs. This role involves both hands-on coding and architectural contributions to scalable automation platforms.
Roles and Responsibilities:
Design and develop modular Python applications for document parsing and intelligent automation.
Build and optimize ML/NLP pipelines for tasks like Named Entity Recognition (NER), classification, and layout-aware data extraction.
Integrate rule-based and AI-driven techniques (e.g., regex, spaCy, PyMuPDF, Tesseract) to handle diverse document formats (see the sketch after this posting).
Develop and deploy models via REST APIs using FastAPI or Flask, and containerize with Docker.
Collaborate with cross-functional teams to define automation goals and data strategies.
Conduct code reviews, mentor junior developers, and uphold best coding practices.
Monitor model performance and implement feedback mechanisms for continuous improvement.
Maintain thorough documentation of workflows, metrics, and architectural decisions.
Mandatory Skills:
Expert in Python (OOP, asynchronous programming, modular design).
Strong foundation in machine learning algorithms and natural language processing techniques.
Hands-on experience with Scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers.
Proficient in developing REST APIs using FastAPI or Flask.
Experience in PDF/text extraction using PyMuPDF, Tesseract, or similar tools.
Skilled in regex-based extraction and rule-based NER.
Familiar with Git, Docker, and any major cloud platform (AWS, GCP, or Azure).
Exposure to MLOps tools such as MLflow, Airflow, or LangChain.
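To make the rule-based side of this role concrete, here is a minimal sketch combining PyMuPDF text extraction with regex field rules, as the posting suggests; the patterns, field names, and sample file are hypothetical and would need tuning per document format.

import re
import fitz  # PyMuPDF

INVOICE_NO = re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\w+)", re.IGNORECASE)
TOTAL = re.compile(r"Total\s*[:\-]?\s*([\d,]+\.\d{2})", re.IGNORECASE)

def extract_invoice_fields(pdf_path: str) -> dict:
    """Pull raw text from every page, then apply regex extraction rules."""
    with fitz.open(pdf_path) as doc:
        text = "\n".join(page.get_text() for page in doc)
    fields = {}
    if match := INVOICE_NO.search(text):
        fields["invoice_number"] = match.group(1)
    if match := TOTAL.search(text):
        fields["total_amount"] = match.group(1).replace(",", "")
    return fields

print(extract_invoice_fields("sample_invoice.pdf"))

Scanned documents with no text layer would instead pass through OCR (e.g., Tesseract) before the same rules are applied.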
Posted 21 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Overview:
We are seeking a talented Data Engineer with expertise in Apache Spark, Python/Java, and distributed systems. The ideal candidate will be skilled in creating and managing data pipelines using AWS.
Key Responsibilities:
Design, develop, and implement data pipelines for ingesting, transforming, and loading data at scale (see the sketch after this posting).
Utilise Apache Spark for data processing and analysis.
Utilise AWS services (S3, Redshift, EMR, Glue) to build and manage efficient data pipelines.
Optimise data pipelines for performance and scalability, considering factors like partitioning, bucketing, and caching.
Write efficient and maintainable Python code.
Implement and manage distributed systems for data processing.
Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions.
Ensure data quality and integrity throughout the data lifecycle.
Qualifications:
Proven experience with Apache Spark and Python/Java.
Strong knowledge of distributed systems.
Proficiency in creating data pipelines with AWS.
Excellent problem-solving and analytical skills.
Ability to work independently and as part of a team.
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience in designing and developing data pipelines using Apache Spark and Python.
Experience with distributed systems concepts (Hadoop, YARN) is a plus.
In-depth knowledge of AWS cloud services for data engineering (S3, Redshift, EMR, Glue).
Familiarity with data warehousing concepts (data modeling, ETL) is preferred.
Strong programming skills in Python (Pandas, NumPy, Scikit-learn are a plus).
Experience with data pipeline orchestration tools (Airflow, Luigi) is a plus.
Strong communication and collaboration skills.
Preferred Qualifications:
Experience with additional AWS services (e.g., AWS Glue, AWS Lambda, Amazon Redshift).
Familiarity with data warehousing and ETL processes.
Knowledge of data governance and best practices.
A good understanding of OOP concepts.
Hands-on experience with SQL database design.
Experience with Python, SQL, and data visualization/exploration tools.
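As a concrete illustration of the S3-based pipelines this posting describes, here is a minimal PySpark batch job; the bucket, paths, and column names are placeholders, and the job assumes a cluster (e.g., EMR) with S3 access already configured.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

# Ingest: read raw order events from S3.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: keep completed orders and aggregate revenue per day.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Load: partition by date so Athena or Redshift Spectrum can prune partitions.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_order_totals/"
)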
Posted 21 hours ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location: Bangalore/Gurugram/Hyderabad
Experience: 7+ years
We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You’ll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.
Key Responsibilities:
Design and implement ETL/ELT pipelines using Databricks and PySpark.
Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets (see the sketch after this posting).
Develop high-performance SQL queries and optimize Spark jobs.
Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
Ensure data quality and compliance across all stages of the data lifecycle.
Implement best practices for data security and lineage within the Databricks ecosystem.
Participate in CI/CD, version control, and testing practices for data pipelines.
Required Skills:
Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
Strong hands-on skills with PySpark and Spark SQL.
Solid experience writing and optimizing complex SQL queries.
Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
Experience with cloud platforms like Azure or AWS.
Understanding of data governance, RBAC, and data security standards.
Preferred Qualifications:
Databricks Certified Data Engineer Associate or Professional.
Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
Exposure to streaming data and real-time processing.
Knowledge of DevOps practices for data engineering.
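To ground the Unity Catalog responsibilities above, here is a minimal sketch as it might run in a Databricks notebook, where spark is provided by the runtime; the catalog, schema, table, and group names are placeholders, and the grant syntax should be verified against your workspace's Databricks SQL version.

from pyspark.sql import functions as F

# Unity Catalog uses a three-level namespace: catalog.schema.table.
orders = spark.table("main.sales.orders")

revenue = (
    orders.groupBy("region")
          .agg(F.sum("amount").alias("revenue_total"))
)
revenue.write.mode("overwrite").saveAsTable("main.sales.revenue_by_region")

# Governance is expressed as SQL grants against the same namespace;
# lineage and audit logs are captured by Unity Catalog automatically.
spark.sql("GRANT SELECT ON TABLE main.sales.revenue_by_region TO `data_analysts`")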
Posted 21 hours ago
4.0 - 9.0 years
4 - 8 Lacs
Pune
Work from Office
Experience: 4+ Years
Expertise in the Python language is a MUST.
SQL (should be able to write complex SQL queries) is a MUST.
Hands-on experience in Apache Flink Streaming or Spark Streaming is a MUST (see the sketch after this posting).
Hands-on expertise with Apache Kafka is a MUST.
Data Lake development experience.
Orchestration (Apache Airflow is preferred).
Spark and Hive: optimization of Spark/PySpark and Hive apps.
Trino/AWS Athena (good to have).
Snowflake (good to have).
Data Quality (good to have).
File storage (S3 is good to have).
Our Offering:
Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
Wellbeing programs and work-life balance: integration and passion-sharing events.
Attractive salary and company initiative benefits.
Courses and conferences.
Hybrid work culture.
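Since the posting asks for hands-on Kafka plus Spark Streaming, here is a minimal Structured Streaming sketch; the broker address, topic, schema, and S3 paths are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-payments-stream").getOrCreate()

payload_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "payments")
         .load()
         # Kafka delivers raw bytes; decode and parse the JSON payload.
         .select(F.from_json(F.col("value").cast("string"), payload_schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream
          .format("parquet")
          .option("path", "s3://example-lake/payments/")
          .option("checkpointLocation", "s3://example-lake/_checkpoints/payments/")
          .start()
)
query.awaitTermination()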
Posted 21 hours ago
The Airflow job market in India is growing rapidly as more companies adopt data pipelines and workflow automation. Apache Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities in industries such as technology, e-commerce, finance, and more.
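To show what day-to-day Airflow work looks like, here is a minimal DAG sketch, assuming Airflow 2.4 or later; the task bodies are placeholders for real extract and load logic.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def load():
    print("write data to the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract must finish before load runs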
The average salary range for Airflow professionals in India varies by experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
In the Airflow field, a typical career path may progress as follows:
- Junior Airflow Developer
- Airflow Developer
- Senior Airflow Developer
- Airflow Tech Lead
In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:
- Python programming
- ETL concepts
- Database management (SQL)
- Cloud platforms (AWS, GCP)
- Data warehousing
As you explore Airflow job opportunities in India, showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay current with the latest developments in Airflow, and demonstrate your problem-solving abilities to stand out in a competitive job market. Good luck!