
16420 Spark Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

On-site

About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills such as public speaking. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation, you're in the right place.

Location: Gurgaon (On-site)
Experience Level: Entry to Early Career (Freshers welcome!)
Shift Options: Domestic | Middle East | International
Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required
Target Joiners: Any (Bachelor's or Master's)

🔥 What You'll Be Owning (Your Impact):
- Lead Activation: Engage daily with high-intent leads through dynamic channels: calls, video consults, and more.
- Sales Funnel Pro: Own the full sales journey, from first hello to successful enrollment.
- Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy.
- Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark's growth engine.
- Client Success: Ensure a smooth onboarding experience and transition for every new learner.
- Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game.

💡 Why Join Sales at Planet Spark?
- Only Warm Leads: Skip the cold calls; our leads already know us and have completed a demo session.
- High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle.
- Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths.
- Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs.
- Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins.
- Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today.

🎯 What You Bring to the Table:
- Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats.
- Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence.
- Empathy First: You genuinely care about clients' goals and tailor your approach to meet them.
- Goal-Oriented: You're self-driven, proactive, and hungry for results.
- Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools.

✨ What's in It for You?
💼 High-growth sales career with serious earning potential
🌱 Continuous upskilling in EdTech, sales, and communication
🧘 Supportive culture that values growth and well-being
🎯 Opportunity to work at the cutting edge of education innovation

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Experience: 3-5 years Looking for a skilled backend developer with strong experience in Java, Spring Boot, and Apache Spark. Responsible for building scalable microservices and processing large datasets in real-time or batch environments. Must have solid understanding of REST APIs, distributed systems, and data pipelines. Experience with cloud platforms (AWS/GCP) is a plus.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

On-site

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Apache Spark
Minimum Experience Required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that applications are developed according to the specified requirements and aligned with business goals. Your typical day will involve collaborating with the team to understand application requirements, designing and developing applications using PySpark, and configuring them to meet business process needs. You will also test and debug applications to ensure their functionality and performance.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design and build applications using PySpark.
- Configure applications to meet business process requirements.
- Collaborate with the team to understand application requirements.
- Test and debug applications to ensure functionality and performance.

Professional & Technical Skills:
- Must-have: Proficiency in PySpark.
- Good-to-have: Experience with Apache Spark.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data-munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
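The posting above names data munging (cleaning, transformation, and normalization) as a required skill. As a purely illustrative sketch of what those three steps mean, here is a minimal version in plain Python rather than the PySpark the role uses; the field names and records are invented for the example:

```python
# Illustrative data-munging sketch: cleaning (drop records with missing
# values), transformation (type coercion), and min-max normalization.
# Field names ("id", "score") are hypothetical, not from the posting.

def clean(rows):
    """Drop rows containing any missing (None) field."""
    return [r for r in rows if all(v is not None for v in r.values())]

def transform(rows):
    """Coerce the 'score' field from string to float."""
    return [{**r, "score": float(r["score"])} for r in rows]

def normalize(rows, field="score"):
    """Min-max normalize a numeric field into the range [0, 1]."""
    values = [r[field] for r in rows]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant columns
    return [{**r, field: (r[field] - lo) / span} for r in rows]

raw = [
    {"id": 1, "score": "10"},
    {"id": 2, "score": None},   # dropped by clean()
    {"id": 3, "score": "30"},
]
result = normalize(transform(clean(raw)))
# surviving scores are min-max scaled to 0.0 and 1.0
```

In PySpark the same pipeline would typically chain `dropna`, `withColumn` casts, and a window or aggregate for the min/max, but the cleaning-then-scaling logic is the same.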

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: NA
Minimum Experience Required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that the applications developed meet both user needs and technical requirements. Your role will be pivotal in fostering a collaborative environment that encourages innovation and problem-solving among team members.

Roles & Responsibilities:
- Minimum of 4 years of experience in data engineering or similar roles.
- Proven expertise with Databricks and data processing frameworks.

Technical Skills:
- SQL, Spark, PySpark, Databricks, Python, Scala, Spark SQL
- Strong understanding of data warehousing, ETL processes, and data pipeline design
- Experience with SQL, Python, and Spark
- Excellent problem-solving and analytical skills
- Effective communication and teamwork abilities

Professional & Technical Skills:
- Experience with and knowledge of Azure SQL Database, Azure Data Factory, and ADLS

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services.
- This position is based in Pune.
- 15 years of full-time education is required.

Posted 2 days ago

Apply

10.0 years

0 Lacs

Dehradun, Uttarakhand, India

On-site

Key Responsibilities:
- Design and develop conceptual, logical, and physical data models to support enterprise data initiatives.
- Build, maintain, and optimize data models within Databricks Unity Catalog.
- Develop efficient data structures using Delta Lake, optimizing for performance, scalability, and reusability.
- Collaborate with data engineers, architects, analysts, and stakeholders to ensure data-model alignment with ingestion pipelines and business goals.
- Translate business and reporting requirements into robust data architecture using best practices in data warehousing and Lakehouse design.
- Maintain comprehensive metadata artifacts, including data dictionaries, data lineage, and modeling documentation.
- Enforce and support data governance, data quality, and security protocols across data ecosystems.
- Continuously evaluate and improve modeling processes.

Skills and Experience:
- 10+ years of hands-on experience in data modeling in Big Data environments.
- Familiarity with modern storage formats like Parquet and ORC.
- Expertise in OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices.
- Proficiency in modeling methodologies including Kimball, Inmon, and Data Vault.
- Hands-on experience with modeling tools such as ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart.
- Proven experience with Databricks, Unity Catalog, and Delta Lake.
- Strong command of SQL and Apache Spark for querying and transformation.
- Hands-on experience with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
- Exposure to Azure Purview or similar data cataloging tools.
- Strong communication and documentation skills, with the ability to work in cross-functional agile teams.
- Experience working in agile/scrum environments.
- Exposure to enterprise data security and regulatory compliance frameworks (e.g., GDPR, HIPAA) is a plus.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
- Certifications such as Microsoft DP-203: Data Engineering on Microsoft Azure.

(ref:hirist.tech)
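The posting above asks for dimensional modeling in the Kimball style. As a toy illustration of the idea (a fact table of transactions joined to a dimension table of descriptive attributes), here is a plain-Python sketch; the tables and values are invented for the example and stand in for what would be Spark DataFrames or warehouse tables in practice:

```python
# Toy star schema: fact_sales rows reference dim_product by surrogate key,
# and reporting queries aggregate facts by a dimension attribute.
# All table contents are hypothetical, invented for illustration.

dim_product = {
    1: {"name": "Widget", "category": "Hardware"},
    2: {"name": "Gadget", "category": "Electronics"},
}

fact_sales = [
    {"product_id": 1, "qty": 3, "amount": 30.0},
    {"product_id": 2, "qty": 1, "amount": 99.0},
    {"product_id": 1, "qty": 2, "amount": 20.0},
]

def sales_by_category(facts, dim):
    """Join facts to the dimension and total the amounts per category."""
    totals = {}
    for row in facts:
        cat = dim[row["product_id"]]["category"]
        totals[cat] = totals.get(cat, 0.0) + row["amount"]
    return totals

totals = sales_by_category(fact_sales, dim_product)
# e.g. {"Hardware": 50.0, "Electronics": 99.0}
```

The design choice this illustrates is the Kimball one: measures live in a narrow fact table keyed by surrogate IDs, while descriptive attributes live in dimensions, so analytical queries are a join plus a group-by.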

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Learning Community Catalyst (Remote, Paid)
Duration: 3 months (with possible extension). Immediate hire.

About the Project: We're building a bold new kind of learning experience for early teens, one that cultivates critical thinking. This is a purpose-driven, globally collaborative project designed to move beyond traditional education models and spark lifelong curiosity in young learners. This isn't just an internship; it's an opportunity to co-create something meaningful from scratch.

What You'll Do:
· Co-design workshop activities and toolkits
· Conduct research on global youth trends and engagement strategies
· Collaborate in a small, creative, and impact-focused remote team
· Optionally, support outreach, storytelling, or digital strategy

What We're Looking For:
· Background in Learning & Development, Instructional Design, or education, or experience working with purpose-driven projects
· Experience (or strong interest) in youth engagement or education innovation
· Ability to own projects independently and deliver high-quality work
· Strong written communication and creative problem-solving skills
· Familiarity with tools like Canva, Google Workspace, Notion, or Miro is a plus (with the ability to learn required tools on the job)

What You'll Gain:
· Real ownership, not a "busy work" internship
· Hands-on experience designing learning that matters
· Exposure to global perspectives in youth development
· A Letter of Recommendation upon successful completion

How to Apply: Write to us at sridharamurthymahesh@gmail.com with:
1. Your resume
2. A short note (50-150 words) on why this project speaks to you (no AI-tool answers, please)

We look forward to hearing from you!

Posted 2 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description

About KPMG in India: KPMG entities in India are professional services firm(s) affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms and are conversant with local laws, regulations, markets, and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada. KPMG entities in India offer services to national and international clients across sectors. We strive to provide rapid, performance-based, industry-focused, and technology-enabled services that reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

Data Architect (Analytics) – AD
Location: NCR (preferably)

Job Summary: The Data Architect will be responsible for designing and managing the data architecture for data analytics projects. This role involves ensuring the integrity, availability, and security of data, as well as optimizing data systems to support business intelligence and analytics needs.

Key Responsibilities:
- Design and implement data architecture solutions to support data analytics and business intelligence initiatives.
- Collaborate with stakeholders to understand data requirements and translate them into technical specifications.
- Design and implement data systems and infrastructure setups, ensuring scalability, security, and performance.
- Develop and maintain data models, data flow diagrams, and data dictionaries.
- Ensure data quality, consistency, and security across all data sources and systems.
- Optimize data storage and retrieval processes to enhance performance and scalability.
- Evaluate and recommend data management tools and technologies.
- Provide guidance and support to data engineers and analysts on best practices for data architecture.
- Conduct assessments of data systems to identify areas for improvement and optimization.
- Understanding of Government of India data governance policies and regulatory requirements.
- Hands-on experience troubleshooting complex technical problems in production environments.

Equal employment opportunity information: KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability, or other legally protected status. KPMG India values diversity, and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary, and refusal to submit such information will not be prejudicial to you.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field (Master's degree preferred).
- Proven experience as a Data Architect or in a similar role, with a focus on data analytics projects.
- Strong knowledge of data architecture frameworks and methodologies.
- Proficiency in database management systems (e.g., SQL, NoSQL), data warehousing, and ETL processes.
- Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure, Google Cloud).
- Certification in data architecture or related fields.

Posted 2 days ago

Apply

5.0 - 8.0 years

12 - 18 Lacs

Bengaluru

Work from Office

• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 3-5 years of experience in ETL development and data integration.
• Proficiency in SQL and experience with relational databases such as Oracle, SQL Server, or MySQL.
• Familiarity with data warehousing concepts and methodologies.
• Hands-on experience with ETL tools like Informatica, Talend, SSIS, or similar.
• Knowledge of data modeling and data governance best practices.
• Strong analytical skills and attention to detail.
• Excellent communication and teamwork skills.
• Experience with Snowflake or willingness to learn and implement Snowflake-based solutions.
• Experience with Big Data technologies such as Hadoop or Spark.
• Knowledge of cloud platforms like AWS, Azure, or Google Cloud and their ETL services.
• Familiarity with data visualization tools such as Tableau or Power BI.
• Hands-on experience with Snowflake for data warehousing and analytics.
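The posting above centers on ETL development. As a minimal, purely illustrative sketch of the extract-transform-load shape (plain Python standing in for tools like Informatica or Talend; the records and the dict "warehouse" are invented for the example):

```python
# Minimal ETL sketch: extract rows from a source, transform them
# (trim/uppercase names, filter inactive records), load into a target.
# The source rows and dict-based "warehouse" are hypothetical.

def extract(source):
    """Pull raw records from the source (here, just copy a list)."""
    return list(source)

def transform(rows):
    """Normalize names and keep only active records."""
    return [
        {"id": r["id"], "name": r["name"].strip().upper()}
        for r in rows
        if r.get("active")
    ]

def load(rows, target):
    """Write transformed rows into the target keyed by id."""
    for r in rows:
        target[r["id"]] = r["name"]
    return target

source = [
    {"id": 1, "name": " alice ", "active": True},
    {"id": 2, "name": "bob", "active": False},  # filtered out
]
warehouse = load(transform(extract(source)), {})
# warehouse now holds only the cleaned, active record
```

Real ETL tools add scheduling, incremental loads, and error handling around this same three-stage skeleton.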

Posted 2 days ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

Nivedan Foundation is a youth-led NGO registered under NITI Aayog and the Ministry of Social Justice & Empowerment. We believe in the power of young minds to spark lasting change. Driven by passion and purpose, we work tirelessly to uplift communities through women empowerment, sustainability, digital literacy, and education for underprivileged children. With every project, we aim to bridge gaps, break barriers, and build a brighter, more inclusive future, because change begins with a voice, and every voice matters.

What is a Unit Head? A Unit Head is the student leader who establishes and oversees a Nivedan Foundation unit at the college or institution level. They serve as the primary bridge between the foundation and the campus community, ensuring that Nivedan's programs run smoothly and achieve real impact.

A Unit Head's key responsibilities include (but are not limited to):
- Establish a Nivedan unit within your college or institution.
- Recruit, onboard, and mentor a team of student volunteers.
- Plan and execute welfare programs and community drives.
- Coordinate logistics, budgeting, and resource allocation for all activities.
- Serve as the primary contact between your unit and the foundation.
- Organize regular team meetings and interactive sessions with volunteers.
- Ensure compliance with Nivedan's goals and policies.

Perks & Benefits:
- Leadership experience
- Certificate & Letter of Recommendation
- Mentorship from changemakers
- Impact beyond limits

We invite you to join us on this path of transformation as a Unit Head and ensure active participation of students from your college in our various initiatives.

Note: This is an unpaid internship/volunteership.

Posted 2 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview of 66degrees: 66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.

Overview of Role: As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem.

Responsibilities:
- Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption.
- AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments.
- Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services, including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
- Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets.
- Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing.
- Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production.
- Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring.
- Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health.
- Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models.

Qualifications:
- 1-2+ years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads.
- Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow.
- Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred), plus strong SQL skills for complex data manipulation, querying, and optimization.
- Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes.
- Proven experience designing, building, and optimizing large-scale ETL/ELT processes.
- Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts.
- Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges.
- Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders.

66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status, or other legally protected class.
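The 66degrees posting highlights data quality and governance for AI/ML datasets. One common pattern behind that responsibility is a validation gate that rejects malformed records before they reach training pipelines. A purely illustrative sketch in plain Python (the schema and records are invented, not from the posting; a real GCP pipeline would enforce this via BigQuery schemas or a Dataflow step):

```python
# Hypothetical data-quality gate: split incoming records into valid
# and rejected sets by type-checking each field against a schema.
# Schema and example rows are invented for illustration.

SCHEMA = {"user_id": int, "event": str}

def validate(rows, schema=SCHEMA):
    """Return (valid, rejected) partitions of rows by field types."""
    valid, rejected = [], []
    for r in rows:
        ok = all(isinstance(r.get(k), t) for k, t in schema.items())
        (valid if ok else rejected).append(r)
    return valid, rejected

rows = [
    {"user_id": 1, "event": "click"},
    {"user_id": "oops", "event": "view"},  # wrong type: rejected
]
valid, rejected = validate(rows)
# one record passes, one is quarantined for inspection
```

Quarantining rejects (rather than silently dropping them) is what makes the lineage and reliability goals in the posting auditable.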

Posted 2 days ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work.

Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team: Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run 10+ million queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate, and access the data in the lake, for both streaming and batch data. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.

About the Role: Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost, and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered "Yes" to these questions, this role is for you!
What you will be doing:
- You will be responsible for streamlining and tuning existing Big Data systems and pipelines and for building new ones. Making sure the systems run efficiently and with minimal cost is a top priority.
- You will make changes to the underlying systems, and if an opportunity arises, you can contribute your work back to open source.
- You will also be responsible for supporting internal customers and on-call services for the systems we host. Making sure we provide a stable environment and a great user experience is another top priority for the team.

We are excited if you have:
- 7+ years of production experience building big data platforms based on Spark, Trino, or equivalent
- Strong programming expertise in Java, Scala, Kotlin, or another JVM language
- A robust grasp of distributed systems concepts, algorithms, and data structures
- Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
- Experience working with at least 3 of the following technologies/tools: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
- Extensive hands-on experience with a public cloud (AWS or GCP)
- BS/MS degree in CS or equivalent
- AI literacy / an AI growth mindset

Benefits: Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits, which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role.
For details specific to your location, please consult with your recruiter.

The Roku Culture: Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a small number of very talented folks can do more, at lower cost, than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea: we come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Introduction: A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.

Your Role and Responsibilities: As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
- Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns.
- Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices.
- Primary Skills: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop Ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python.
- Strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL).
- Experience with container platforms such as Docker and Kubernetes, and with messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development.
- Familiarity with Ant, Maven, or other build automation for Java, Spring Boot, APIs, microservices, and security.

Preferred Technical and Professional Experience:
- Experience in concurrent design and multi-threading.

Posted 2 days ago

Apply

8.0 - 13.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Job Description

What you will do: As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed.
The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member assisting in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates on technical implementation.

What We Expect of You: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications: Doctorate, Master's, or Bachelor's degree and 8 to 13 years of Computer Science, IT, or related field experience.
Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Proficient in SQL and Python for extracting, transforming, and analyzing complex datasets from relational data stores
Proficient in Python with strong experience in ETL tools such as Apache Spark and various data processing packages, supporting scalable data workflows and machine learning pipeline development
Strong understanding of data modeling, data warehousing, and data integration concepts
Proven ability to optimize query performance on big data platforms
Knowledge of data visualization and analytics tools like Spotfire and Power BI

Preferred Qualifications:
Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Strong knowledge of Oracle / SQL Server, stored procedures, and PL/SQL; knowledge of the Linux OS
Experience implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models
Skilled in developing machine learning models using Python, with hands-on experience in deep learning frameworks including PyTorch and TensorFlow
Strong understanding of data governance frameworks, tools, and best practices
Knowledge of vector databases, including implementation and optimization
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
Databricks certification (preferred)
AWS Data Engineer/Architect certification

Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills

What You Can Expect Of Us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
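The SQL and Python proficiency the posting calls for amounts to extract-transform-analyze queries over a relational store. A minimal sketch using Python's built-in sqlite3 module (the table, columns, and data here are invented purely for illustration):

```python
import sqlite3

# In-memory database standing in for a relational data store (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EU", 120.0), (2, "EU", 80.0), (3, "US", 300.0)],
)

# Extract and transform: total revenue per region, highest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('US', 300.0), ('EU', 200.0)]
```

In a production pipeline the same GROUP BY pattern would typically run on Spark or Snowflake rather than SQLite; only the scale changes, not the idea.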

Posted 2 days ago

Apply

6.0 - 10.0 years

0 Lacs

India

On-site

6-10 years of overall experience, mainly in the data engineering space, building data ingestion and transformation pipelines.

Design and implement data pipelines using best practices and industry-leading tools like Databricks and Azure Data Factory (ADF)
Extract, transform, and load large datasets from various sources, ensuring data quality and integrity
Utilize Python and Spark to perform complex data manipulations and aggregations
Integrate data pipelines with APIs and external systems using efficient methods
Experience in data cleansing and Azure Data Explorer workflows
Monitor and maintain data pipelines, ensuring smooth operation and identifying potential issues
Collaborate with data scientists, analysts, and engineers to understand data needs and deliver valuable insights
Must have implementation experience on Azure-based cloud data projects/programs as a solution architect
Expert-level knowledge of Spark
Extensive hands-on experience working with data using SQL, Python, Java, and Scala
Strong experience with and understanding of very large-scale data architecture, solutioning, and operationalization of data warehouses, data lakes, and analytics platforms (both cloud and on-premise)
Excellent communication skills with the ability to clearly present ideas, concepts, and solutions
Bonus points if you have experience in the insurance domain for data engineering projects.

The following are added for data quality/catalog:
Strong attention to detail and accuracy, knowledge of data quality frameworks and standards, experience with data profiling and data analysis tools, understanding of data management best practices, and excellent communication and collaboration skills.
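The data-quality and data-profiling expectations above can be sketched as a tiny check in plain Python. This is illustrative only: in practice the same counts would be computed in a Spark or Databricks job, and the record fields here are made up:

```python
# Minimal data-profiling sketch: count null values and duplicate keys per batch.
# Illustrative stand-in for a Spark-scale quality check; fields are hypothetical.
records = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},        # null value
    {"id": 1, "email": "a@x.com"},   # duplicate id
]

null_emails = sum(1 for r in records if r["email"] is None)
ids = [r["id"] for r in records]
duplicate_ids = len(ids) - len(set(ids))

report = {"rows": len(records), "null_emails": null_emails, "duplicate_ids": duplicate_ids}
print(report)  # {'rows': 3, 'null_emails': 1, 'duplicate_ids': 1}
```

A catalog tool would persist a report like this per ingestion batch so downstream consumers can see data health at a glance.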

Posted 2 days ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

[Role Name: IS Architecture]
Job Posting Title: Data Architect
Workday Job Profile: Principal IS Architect
Department Name: Digital, Technology & Innovation
Role GCF: 06A

About Amgen

Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role

Role Description: The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Architect is a senior-level position responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Architect drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data. This role will manage a team of Data Modelers.

Roles & Responsibilities:
Provide oversight to data modeling team members
Develop and maintain conceptual, logical, and physical data models to support business needs
Establish and enforce data standards, governance policies, and best practices
Design and manage metadata structures to enhance information retrieval and usability
Maintain comprehensive documentation of the architecture, including principles, standards, and models
Evaluate and recommend technologies and tools that best fit the solution requirements
Evaluate emerging technologies and assess their potential impact
Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience: [GCF Level 6A]
Doctorate degree and 8 years of experience in Computer Science, IT, or related field OR
Master’s degree with 12 to 15 years of experience in Computer Science, IT, or related field OR
Bachelor’s degree with 14 to 17 years of experience in Computer Science, IT, or related field

Functional Skills:

Must-Have Skills:
Data Modeling: Expert in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
Information Governance: Familiarity with policies and procedures for managing information assets, including security, privacy, and compliance.
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), and performance tuning on big data processing

Good-to-Have Skills:
Experience with graph technologies such as Stardog, AllegroGraph, and MarkLogic

Professional Certifications:
Certifications in Databricks are desired

Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills

Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
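The step from a logical model to a physical design that this role centers on can be shown with a toy example via Python's built-in sqlite3. The entities, keys, and index below are hypothetical, not any actual Amgen model:

```python
import sqlite3

# Physical realization of a simple logical model: one Patient has many
# Enrollments. Foreign key plus supporting index, per common modeling practice.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    full_name  TEXT NOT NULL
);
CREATE TABLE enrollment (
    enrollment_id INTEGER PRIMARY KEY,
    patient_id    INTEGER NOT NULL REFERENCES patient(patient_id),
    study_code    TEXT NOT NULL
);
CREATE INDEX idx_enrollment_patient ON enrollment(patient_id);
""")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['enrollment', 'patient']
```

The same one-to-many shape would be documented in the conceptual and logical models first; the DDL is only the final, platform-specific step.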

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do

The Sr Associate Software Engineer is responsible for designing, developing, and maintaining software applications and solutions that meet business needs and for ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, data engineers, and other engineers to create high-quality, scalable software solutions, automate operations, monitor system health, and respond to incidents to minimize downtime.
Roles & Responsibilities:
Possess strong rapid prototyping skills and quickly translate concepts into working code
Contribute to both front-end and back-end development using cloud technology
Develop innovative solutions using generative AI technologies
Ensure code quality and adherence to best practices
Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations
Identify and resolve technical challenges effectively
Stay updated with the latest trends and advancements
Work closely with the product team, business team, and other stakeholders
Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements
Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications
Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software
Identify and resolve software bugs and performance issues
Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time
Customize modules to meet specific business requirements
Work on integrating with other systems and platforms to ensure seamless data flow and functionality
Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently

What We Expect Of You

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.

Must-Have Skills:
Proficiency in Python/PySpark development, Flask/FastAPI, C#, ASP.NET, PostgreSQL, Oracle, Databricks, DevOps tools, CI/CD, and data ingestion. Candidates should be able to write clean, efficient, and maintainable code.
Knowledge of HTML, CSS, and JavaScript, along with popular front-end frameworks like React or Angular, is required to build interactive and responsive web applications
In-depth knowledge of data engineering concepts, ETL processes, and data architecture principles
Strong understanding of cloud computing principles, particularly within the AWS ecosystem
Strong understanding of software development methodologies, including Agile and Scrum
Experience with version control systems like Git
Hands-on experience with various cloud services and an understanding of the pros and cons of each within well-architected cloud design principles
Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills
Experience with API integration, serverless, and microservices architecture
Experience with SQL/NoSQL databases and vector databases for large language models

Preferred Qualifications:
Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes)
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk)
Experience with data processing tools like Spark, or similar
Experience with SAP integration technologies

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills

What You Can Expect Of Us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
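The "develop and execute unit tests" responsibility above can be sketched with Python's built-in unittest module. The function under test is invented for illustration; only the testing pattern itself is the point:

```python
import unittest

# Hypothetical function under test: normalizes user-supplied ingestion keys.
def normalize_key(key: str) -> str:
    return key.strip().lower().replace(" ", "_")

class NormalizeKeyTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_key("  Batch ID "), "batch_id")

    def test_idempotent(self):
        # Normalizing an already-normalized key must not change it.
        once = normalize_key("Run Date")
        self.assertEqual(normalize_key(once), once)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeKeyTest))
print(result.wasSuccessful())  # True
```

In a CI/CD setup of the kind the posting mentions, a runner like Jenkins would execute such suites on every commit and block the merge on failure.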

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do

As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed.
The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member assisting in the design and development of the data pipeline
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation

What We Expect Of You

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.

Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
Proficiency in data analysis tools (e.g., SQL)
Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
Experience with ETL tools such as Apache Spark and various Python packages related to data processing and machine learning model development
Strong understanding of data modeling, data warehousing, and data integration concepts
Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Strong knowledge of Oracle / SQL Server, stored procedures, and PL/SQL; knowledge of the Linux OS
Knowledge of data visualization and analytics tools like Spotfire and Power BI
Strong understanding of data governance frameworks, tools, and best practices
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
Databricks certification (preferred)
AWS Data Engineer/Architect certification

Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills

What You Can Expect Of Us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
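The "optimize query performance" skill this posting asks for boils down to making the engine search instead of scan. A toy sketch with Python's built-in sqlite3 (the same principle scales up to partitioning and clustering on Databricks or Snowflake; the table here is invented):

```python
import sqlite3

# Toy illustration of index-driven query tuning.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 50, "x") for i in range(1000)])

query = "SELECT COUNT(*) FROM events WHERE user_id = 7"

# Without an index the planner must scan every row.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")

# With the index the planner searches only the matching entries.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(before)  # e.g. "SCAN events"
print(after)   # e.g. "SEARCH events USING COVERING INDEX idx_events_user ..."
```

The exact plan text varies between SQLite versions, but the scan-to-search shift is the pattern a query tuner looks for on any platform.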

Posted 2 days ago

Apply


0 years

0 Lacs

Tiruchirappalli, Tamil Nadu, India

On-site

Company: ADRIG AI Technologies is a dynamic service-based company specializing in web development, artificial intelligence, game development, and tech talent acquisition. We deliver end-to-end solutions — from responsive websites and AI-powered tools to full-fledged game engines and skilled tech professionals for global projects. This opportunity is part of ADRIG’s newly launched edutech vertical, FutureMinds — an initiative aimed at transforming school-level AI education through immersive, real-world experiences. FutureMinds blends cutting-edge technology with engaging communication to spark curiosity and confidence in the next generation of innovators.

Locations:
Thiruvallur (Perambakkam, Polivakkam, Pallipet)
Trichy (Yagapuram)
Sivagangai (Kodikottai)
Kanyakumari (Aralvaimozhi, Manavalakurichi)

Job Type: Part-Time | Short-Term Engagement (2 months)

Workload & Compensation:
2 to 8 hours of work per week
Each session is 2 hours long
₹1,000 per session
Immediate joiners preferred

About the Opportunity: We are inviting applications for a part-time role ideal for individuals with both technical proficiency and exceptional communication skills. Candidates should have a background in Computer Science and Engineering (B.Tech CSE) and be confident public speakers capable of clearly and engagingly articulating AI, math, and technology concepts.
Key Responsibilities:
Deliver short, structured sessions in English
Simplify and present technical topics (AI, tech, math) in an engaging way
Communicate effectively with school-age learners in a structured offline setting
Represent the organization with energy, clarity, and professionalism

Qualifications:
B.Tech in Computer Science or related field (required)
Excellent spoken English; prior public speaking or anchoring experience preferred
Basic understanding of AI and mathematics
Strong interpersonal skills and stage presence
Experience as an RJ, emcee, communicator, educator, or content presenter is a plus

How to Apply: Interested candidates may message us directly on LinkedIn with their contact information.

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

Remote

We are seeking a skilled Sr. Azure Data Engineer with hands-on experience in modern data engineering tools and platforms within the Azure ecosystem. The ideal candidate will have a strong foundation in data integration, transformation, and migration, along with a passion for working on complex data migration projects.

Job Title: Sr. Azure Data Engineer
Location: Remote work
Work Timings: 2:00 PM – 11:00 PM IST
No. of Openings: 2

Please Note: This is a pure Azure-specific role. If your expertise is primarily in AWS or GCP, we kindly request that you do not apply.

Responsibilities:
Lead the migration of large-scale SQL workloads from on-premise environments to Azure, ensuring high data integrity, minimal downtime, and performance optimization
Design, develop, and manage end-to-end data pipelines using Azure Data Factory or Synapse Data Factory to orchestrate migration and ETL processes
Build and administer scalable, secure Azure data lakes to store and manage structured and unstructured data during and post-migration
Utilize Azure Databricks, Synapse Spark pools, Python, and PySpark for advanced data transformation and processing
Develop and fine-tune SQL/T-SQL scripts for data extraction, cleansing, transformation, and reporting in Azure SQL Database, SQL Managed Instances, and SQL Server
Design and maintain ETL solutions using SQL Server Integration Services (SSIS), including reengineering SSIS packages for Azure compatibility
Collaborate with cloud architects, DBAs, and application teams to assess existing workloads and define the best migration approach
Continuously monitor and optimize data workflows for performance, reliability, and cost-effectiveness across Azure platforms
Enforce best practices in data governance, security, and compliance throughout the migration lifecycle

Required Skills and Qualifications:
3+ years of hands-on experience in data engineering, with a clear focus on SQL workload migration to Azure
- Deep expertise in: Azure Data Factory / Synapse Data Factory; Azure Data Lake; Azure Databricks / Synapse Spark Pools; Python and PySpark; SQL; SSIS design, development, and migration to Azure.
- Proven track record of delivering complex data migration projects (on-prem to Azure, or cloud-to-cloud).
- Experience re-platforming or re-engineering SSIS packages for Azure Data Factory or the Azure-SSIS Integration Runtime.
- Microsoft Certified: Azure Data Engineer Associate or a similar certification preferred.
- Strong problem-solving skills, attention to detail, and the ability to work in fast-paced environments.
- Excellent communication skills, with the ability to collaborate across teams and present migration strategies to stakeholders.

If you believe you are qualified and are looking forward to setting your career on a fast track, apply by submitting a few paragraphs explaining why you believe you are the right person for this role. To know more about Techolution, visit our website: www.techolution.com

About Techolution: Techolution is a next-gen AI consulting firm on track to become one of the most admired brands in the world for "AI done right". Our purpose is to harness our expertise in novel technologies to deliver more profits for our enterprise clients while helping them deliver a better human experience for the communities they serve. At Techolution, we build custom AI solutions that produce revolutionary outcomes for enterprises worldwide. Specializing in "AI Done Right," we leverage our expertise and proprietary IP to transform operations and help achieve business goals efficiently.
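As a rough illustration of the "extraction, cleansing, transformation" SQL work this role describes, the sketch below uses Python's built-in sqlite3 as a stand-in for Azure SQL Database; the table and column names (staging_orders, clean_orders, amount, region) are hypothetical examples, not taken from the posting.

```python
import sqlite3

# sqlite3 stands in for Azure SQL Database; in a real migration this SQL
# would be T-SQL run against the target database. Table and column names
# here are hypothetical illustrations.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE staging_orders (id INTEGER, amount TEXT, region TEXT)")
cur.executemany(
    "INSERT INTO staging_orders VALUES (?, ?, ?)",
    [(1, " 120.50 ", "south"), (2, None, "NORTH"), (3, "99.99", "North")],
)

# Cleansing pass: trim whitespace, cast text amounts to numbers,
# normalize region casing, and drop rows with missing amounts.
cur.execute("""
    CREATE TABLE clean_orders AS
    SELECT id,
           CAST(TRIM(amount) AS REAL) AS amount,
           UPPER(TRIM(region))        AS region
    FROM staging_orders
    WHERE amount IS NOT NULL
""")

rows = cur.execute("SELECT id, amount, region FROM clean_orders ORDER BY id").fetchall()
print(rows)  # [(1, 120.5, 'SOUTH'), (3, 99.99, 'NORTH')]
```

The same shape (stage raw data, then create a cleansed table with casts and filters) carries over to T-SQL in Azure SQL or to a Databricks notebook, only the engine changes.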
We are honored to have recently received the prestigious Inc 500 Best In Business award, a testament to our commitment to excellence. We were also named AI Solution Provider of the Year by The AI Summit 2023, were a Platinum sponsor at the Advantage DoD 2024 Symposium, and more. While we are big enough to be trusted by some of the greatest brands in the world, we are small enough to care about delivering meaningful, ROI-generating innovation at a guaranteed price for each client that we serve. Our thought leader, Luv Tulsidas, wrote and published a book in collaboration with Forbes, "Failing Fast? Secrets to Succeed Fast with AI". Refer here for more details on the content: https://www.luvtulsidas.com/

Let's explore further! Uncover our unique AI accelerators with us:
1. Enterprise LLM Studio: Our no-code DIY AI studio for enterprises. Choose an LLM, connect it to your data, and create an expert-level agent in 20 minutes.
2. AppMod.AI: Modernizes ancient tech stacks quickly, achieving over 80% autonomy for major brands!
3. ComputerVision.AI: Offers customizable Computer Vision and Audio AI models, plus DIY tools and a Real-Time Co-Pilot for human-AI collaboration!
4. Robotics and Edge Device Fabrication: Provides comprehensive robotics, hardware fabrication, and AI-integrated edge design services.
5. RLEF AI Platform: Our proven Reinforcement Learning with Expert Feedback (RLEF) approach bridges lab-grade AI to real-world AI.

Some videos you wanna watch!
- Computer Vision demo at The AI Summit New York 2023
- Life at Techolution
- GoogleNext 2023
- Ai4 - Artificial Intelligence Conferences 2023
- WaWa - Solving Food Wastage
- Saving Lives - Brooklyn Hospital
- Innovation Done Right on Google Cloud
- Techolution featured on Worldwide Business with Kathy Ireland
- Techolution presented by ION World's Greatest

Visit us @ www.techolution.com to learn more about our revolutionary core practices and how we enrich the human experience with technology.

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

HCL Tech - Mega Walk-in Drive: Bulk Hiring for Freshers, 28th & 29th July

Designation: Customer Service Associate
Any fresher can walk in for the interviews (Arts and Science, MBA, MA, MSc, and M.Com graduates can also apply). 2025 pass-outs can walk in; a copy of the final semester result is mandatory.
Educational Qualification: Graduate in any stream. *Engineering graduates will not be considered.*
Shift: US shifts
Mode of Interview: Walk-in
Dates: 28th & 29th July
Timing: 11:00 AM to 2:00 PM
Contact HR: Freddy & Pradeep
Work Location: Sholinganallur
Interview Location: HCL Tech, Sholinganallur ELCOT campus, Tower 4, Chennai-119
You can also refer your friends for this role.

Perks and Benefits:
- MNC cab facility (two-way, up to 30 km only)
- Salary among the best in the industry
- Excellent working environment
- Free cab for female employees
- International trainers
- World-class exposure

How You'll Grow
At HCL Tech, we offer continuous opportunities for you to find your spark and grow with us. We want you to be happy and satisfied with your role and to really learn what type of work sparks your brilliance the best. Throughout your time with us, we offer transparent communication with senior-level employees, learning and career development programs at every level, and opportunities to experiment in different roles or even pivot industries. We believe that you should be in control of your career, with unlimited opportunities to find the role that fits you best.

Why Us
We are one of the fastest-growing large tech companies in the world, with offices in 60+ countries across the globe and 222,000 employees. Our company is extremely diverse, with 165 nationalities represented. We offer the opportunity to work with colleagues across the globe. We offer a virtual-first work environment, promoting good work-life integration and real flexibility. We are invested in your growth, offering learning and career development opportunities at every level to help you find your own unique spark.

Posted 2 days ago

Apply

6.0 - 7.0 years

15 - 17 Lacs

India

On-site

About The Opportunity
This role is within the fast-paced enterprise technology and data engineering sector, delivering high-impact solutions in cloud computing, big data, and advanced analytics. We design, build, and optimize robust data platforms powering AI, BI, and digital products for leading Fortune 500 clients across industries such as finance, retail, and healthcare. As a Senior Data Engineer, you will play a key role in shaping scalable, production-grade data solutions with modern cloud and data technologies.

Role & Responsibilities
- Architect and Develop Data Pipelines: Design and implement end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
- Data Warehouse & Data Mart Design: Create scalable data warehouses/marts that empower self-service analytics and machine learning workloads.
- Database Modeling & Optimization: Translate logical models into efficient physical schemas, ensuring optimal partitioning and performance management.
- ETL/ELT Workflow Automation: Build, automate, and monitor robust data ingestion and transformation processes with best practices in reliability and observability.
- Performance Tuning: Optimize Spark jobs and SQL queries through careful tuning of configurations, indexing strategies, and resource management.
- Mentorship and Continuous Improvement: Provide production support, mentor team members, and champion best practices in data engineering and DevOps methodology.

Skills & Qualifications
Must-Have
- 6-7 years of hands-on experience building production-grade data platforms, including at least 3 years with Apache Spark/Databricks.
- Expert proficiency in PySpark, Python, and advanced SQL, with a record of performance-tuning distributed jobs.
- Proven expertise in data modeling, data warehouse/mart design, and managing ETL/ELT pipelines using tools like Airflow or dbt.
- Hands-on experience with major cloud platforms such as AWS or Azure, and familiarity with modern lakehouse/data-lake patterns.
- Strong analytical, problem-solving, and mentoring skills, with a DevOps mindset and commitment to code quality.

Preferred
- Experience with AWS analytics services (Redshift, Glue, S3) or the broader Hadoop ecosystem.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Exposure to streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
- Familiarity with ML feature stores, MLOps workflows, or data governance frameworks.
- Relevant certifications (Databricks, AWS, Azure) or active contributions to open-source projects.

Location: India | Employment Type: Full-time

Skills: agile methodologies, team leadership, performance tuning, SQL, ELT, Airflow, AWS, data modeling, Apache Spark, PySpark, data, Hadoop, Databricks, Python, dbt, big data technologies, ETL, Azure
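The "ingestion → transformation → consumption" pipeline shape this posting describes can be sketched in plain Python without Spark or Databricks; the record fields (event, region, revenue) are hypothetical illustrations, and in production each stage would run against cloud object storage rather than in-memory lists.

```python
from collections import defaultdict

def ingest():
    # Ingestion: in a real pipeline this would read raw events from
    # cloud object storage (S3, ADLS); hard-coded rows stand in here.
    return [
        {"event": "sale", "region": "EU", "revenue": 120.0},
        {"event": "sale", "region": "US", "revenue": 80.0},
        {"event": "refund", "region": "EU", "revenue": -20.0},
    ]

def transform(records):
    # Transformation: drop rows with missing revenue and aggregate per
    # region, analogous to a groupBy().agg() in Spark.
    totals = defaultdict(float)
    for r in records:
        if r["revenue"] is not None:
            totals[r["region"]] += r["revenue"]
    return dict(totals)

def consume(marts):
    # Consumption: the aggregated mart is what self-service analytics
    # or an ML feature pipeline would read.
    return sorted(marts.items())

marts = transform(ingest())
print(consume(marts))  # [('EU', 100.0), ('US', 80.0)]
```

The design point is that each stage is a separate, independently testable step; the same decomposition is what makes a Spark pipeline observable and easy to monitor.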

Posted 2 days ago

Apply

7.0 years

15 - 17 Lacs

India

Remote

Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering, and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI, and digital products for Fortune 500 clients across finance, retail, and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.

Role & Responsibilities
- Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
- Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
- Translate logical data models into physical schemas; own database design, partitioning, and lifecycle management for cost-efficient performance.
- Implement, automate, and monitor ETL/ELT workflows, ensuring reliability, observability, and robust error handling.
- Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times.
- Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.

Skills & Qualifications
Must-Have
- 6–7 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
- Expert proficiency in PySpark, Python, and advanced SQL, with a track record of performance-tuning distributed jobs.
- Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
- Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse/data-lake patterns.
- Strong problem-solving skills, a DevOps mindset, and commitment to code quality; comfortable mentoring fellow engineers.
Preferred
- Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
- Exposure to ML feature stores, MLOps workflows, and data-governance/compliance frameworks.
- Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.

Benefits & Culture Highlights
- Remote-first and flexible hours, with 25+ PTO days and comprehensive health cover.
- Annual training budget and certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
- An inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.

Skills: data modeling, big data technologies, team leadership, AWS, data, SQL, agile methodologies, performance tuning, ELT, Airflow, Apache Spark, PySpark, Hadoop, Databricks, Python, dbt, ETL, Azure
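Orchestrating ETL/ELT workflows with a tool such as Airflow or dbt boils down to resolving a dependency graph of tasks. As a minimal sketch, Python's stdlib TopologicalSorter stands in for the scheduler; the task names below are hypothetical, not from the posting.

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on, the same DAG shape
# an Airflow scheduler or dbt's model graph resolves before running.
# Task names are illustrative only.
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "stage_raw": {"extract_orders", "extract_customers"},
    "build_dim_customer": {"stage_raw"},
    "build_fact_orders": {"stage_raw", "build_dim_customer"},
}

# static_order() yields one valid execution order; independent tasks
# (the two extracts) may legally run in either order, or in parallel.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

A real scheduler adds retries, backfills, and parallel execution of independent branches on top of exactly this ordering step.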

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Unlock Your Creative Potential: Join the Revolution in Hair Perfume as a Star Intern!

Imagine being at the forefront of a groundbreaking beauty brand that's redefining how the world experiences scent and style—right from the heart of innovation! We're an exciting new hair perfume startup, fusing luxurious fragrances with cutting-edge hair care to create products that turn heads and ignite senses. If you're a visionary creative eager to skyrocket your career, this is your golden ticket to shape a brand destined for stardom. Join our vibrant, fast-paced team and transform ideas into viral sensations—your contributions could be the spark that launches us into the spotlight!

Role Highlights: Where Magic Meets Opportunity
Position: Creative Marketing & Graphic Design Intern (Your Launchpad to Beauty Industry Fame!)
Duration: 3-6 months (flexible scheduling to fit your life, part-time or full-time vibes)
Location: Mostly remote bliss (work from anywhere!), spiced up with inspiring in-person collabs at trendy Pune cafes once or twice a week—think brainstorming over lattes in the city's coolest spots
Perks & Rewards: Attractive stipend (tailored to your skills and experience) + invaluable portfolio boosters, mentorship from beauty trailblazers, and a real shot at a full-time gig in our growing empire

Your Epic Missions: Dive into Hands-On Excitement
Branding and Design:
- Craft a jaw-dropping logo that embodies the soul of our hair perfume revolution—make it iconic and unforgettable!
- Elevate product packaging to luxurious heights, blending elegance, innovation, and irresistible allure that captivates our dream audience.
Content Creation:
- Weave captivating stories through top-tier writing for our website, blog, and promo magic—your words will enchant and convert!
- Produce scroll-stopping social media posts, reels, and stories that explode on Instagram, TikTok, and Pinterest, building a loyal fanbase and viral momentum.
Digital Marketing and SEO:
- Fuel game-changing strategies to amplify our brand's reach, turning clicks into devoted customers.
- Master SEO wizardry to dominate search results, supercharge visibility, and position us as the go-to in beauty trends.

What You'll Bring: The Spark We're Craving
- Enrolled in or fresh out of a program in Marketing, Graphic Design, Communications, or something equally awesome.
- Wizard-level skills in tools like Adobe Creative Suite (Photoshop, Illustrator) or Canva to bring designs to life.
- A storytelling genius with writing that hooks, engages, and inspires—bonus if you've got a knack for audience vibes.
- Proven flair for social media sorcery, especially crafting addictive short-form videos that rack up views.
- Solid grasp of digital marketing essentials and SEO superpowers (think Google Analytics and keyword mastery).
- A bold, innovative mind with a burning passion for beauty—startup experience? Amazing, but your enthusiasm is the real MVP!
- Stellar communication, self-drive, and the ability to thrive in an exhilarating, ever-evolving scene.

Why This Gig Will Change Your World
- Gain insider access to launching a brand from zero to hero—your work will be seen by thousands!
- Personalized guidance from passionate pros, plus a portfolio that'll dazzle future employers.
- Ultimate flexibility to juggle studies, side hustles, or whatever fuels you.
- A buzzing, collaborative vibe where your wildest ideas aren't just heard—they're celebrated and executed!

Ready to infuse your creativity into the next big thing in beauty and leave your mark on the hair perfume universe? This isn't just an internship—it's your stage to shine! Shoot us your resume, a standout portfolio (designs, writings, or social gems), and a quick pitch on why you're our perfect match at sereluxgemora@gmail.com. Spots are filling fast—seize the moment and apply today!

Posted 2 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
Would you like to build highly available, scalable, and distributed engineering systems for one of the largest data lakes in Amazon? Does petabyte scale excite you? The Spektr Datalake team owns the central data lake for Advertising, unifying petabytes of data generated across the Ads pipeline (campaigns, ad serving, billing, clicks, impressions, and more) into a single scalable repository. This is used across the organization to drive hundreds of thousands of complex queries for analysis, measurement, and reporting decisions for our customers. The data lake enables customers such as data engineers, business analysts, ML engineers, research scientists, economists, and data experts to collect what they need via world-class self-service tools. The Spektr Datalake team is building the next version of its data lake for 5x growth. An SDE on the ADM team has a unique opportunity to design and innovate solutions at this scale, delivering robust and scalable microservices built on Java and AWS, as well as innovating with big data technologies like Spark, EMR, Athena, and more. You will create value that materially impacts the speed and quality of decision-making across the organization, resulting in tangible business growth.
Key job responsibilities
- Engage with key decision makers such as Product & Program Managers to understand customer requirements and brainstorm solutions
- Design, code, and deploy components and microservices for the core job management pipeline
- Ensure testability, maintainability, and a low operational footprint for your code
- Participate in operational responsibilities with your team
- Innovate on AWS technology to improve latency, reduce cost, and simplify operations

A day in the life
- Focus on core engineering opportunities to guarantee system availability that matches our data growth
- Work with a skilled team of engineers, managers, and decision makers to consistently meet customer demand
- Automate monitoring of data availability, quality, and usability via simplified metrics, and drive innovations to improve guarantees for our customers
- Build frugal solutions that will help make the Spektr data lake the most cost-efficient data lake in Amazon

About The Team
The mission of the Spektr Datalake team is to provide data that helps the advertising organization make informed analyses and decisions for our customers, and to determine and deploy investments for future growth via a set of central and unified products and services. Our team focuses on simplicity, usability, speed, compliance, cost efficiency, and enabling high-velocity decision making so our customers can generate high-quality insights faster. We are a global team with presence across IN and NA.
Basic Qualifications
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience in the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience programming with at least one software programming language

Preferred Qualifications
- 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2990168
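Query engines like Athena and Spark, both named in this posting, prune data lakes by Hive-style partition keys rather than scanning every object. A minimal sketch of that layout convention follows; the bucket and dataset names are hypothetical illustrations, not actual Spektr paths.

```python
from datetime import date

def partition_key(dataset: str, event_date: date, region: str) -> str:
    # Hive-style key=value path segments (dt=..., region=...) let query
    # engines skip partitions that a WHERE clause rules out, instead of
    # scanning the whole lake. Bucket and dataset names are made up.
    return f"s3://example-ads-lake/{dataset}/dt={event_date:%Y-%m-%d}/region={region}/"

key = partition_key("impressions", date(2024, 7, 1), "IN")
print(key)  # s3://example-ads-lake/impressions/dt=2024-07-01/region=IN/
```

Choosing partition columns that match the most common query predicates (typically date, then a low-cardinality dimension) is what keeps per-query scan cost flat as the lake grows.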

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies