
15352 Spark Jobs - Page 29

Set up a job alert
JobPe aggregates listings for easy access, but applications are made directly on the employer's job portal.

10.0 years

10 Lacs

Hyderābād

On-site

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

About Salesforce
We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too: driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good, you've come to the right place.

About the Role
We're building a product data platform to bring Salesforce's product signals into the agentic era, powering smarter, adaptive, and self-optimizing product experiences. As a Senior Manager, you'll lead a team of talented engineers in designing and building trusted, scalable systems that capture, process, and surface rich product signals for use across analytics, AI/ML, and customer-facing features. You'll guide architectural decisions, drive cross-functional alignment, and shape strategy around semantic layers, knowledge graphs, and metrics frameworks that help teams publish and consume meaningful insights with ease. We're looking for a strategic, systems-minded leader who thrives in ambiguity, excels at cross-org collaboration, and has a strong technical foundation to drive business and product impact.
What You'll Do
- Lead and grow a high-performing engineering team focused on batch and streaming data pipelines using technologies like Spark, Trino, Flink, and DBT
- Define and drive the vision for intuitive, scalable metrics frameworks and a robust semantic signal layer
- Partner closely with product, analytics, and engineering stakeholders to align schemas, models, and data usage patterns across the org
- Set engineering direction and best practices for building reliable, observable, and testable data systems
- Mentor and guide engineers in both technical execution and career development
- Contribute to long-term strategy around data governance, AI-readiness, and intelligent system design
- Serve as a thought leader and connector across domains to ensure data products deliver clear, trusted value

What We're Looking For
- 10+ years of experience in data engineering or backend systems, with at least 2 years in technical leadership or management roles
- Strong hands-on technical background, with deep experience in big data frameworks (e.g., Spark, Trino/Presto, DBT)
- Familiarity with streaming technologies such as Flink or Kafka
- Solid understanding of semantic layers, data modeling, and metrics systems
- Proven success leading teams that build data products or platforms at scale
- Experience with cloud infrastructure (especially AWS: S3, EMR, ECS, IAM)
- Exposure to modern metadata platforms, Snowflake, or knowledge graphs is a plus
- Excellent communication and stakeholder management skills
- A strategic, pragmatic thinker who is comfortable making high-impact decisions amid complexity

Why Join Us
This is your opportunity to shape how Salesforce understands and uses its product data. You'll be at the forefront of transforming raw product signals into intelligent, actionable insights, powering everything from internal decision-making to next-generation AI agents. If you're excited by the challenge of leading high-impact teams and building trusted systems at scale, we'd love to talk to you.
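The role above centers on metrics frameworks and a semantic signal layer. As a rough illustration of the idea (not Salesforce's actual stack; every table, column, and metric name below is hypothetical), a semantic layer can capture metric definitions as declarative config and compile them to SQL:

```python
# Minimal sketch of a semantic metrics layer: declarative metric
# definitions compiled into SQL strings. All names are hypothetical.

METRICS = {
    "weekly_active_users": {
        "table": "product_events",
        "measure": "COUNT(DISTINCT user_id)",
        "time_column": "event_date",
    },
    "total_revenue": {
        "table": "orders",
        "measure": "SUM(amount)",
        "time_column": "order_date",
    },
}

def compile_metric(name: str, grain: str = "week") -> str:
    """Turn a metric definition into a grouped SQL query string."""
    m = METRICS[name]
    return (
        f"SELECT DATE_TRUNC('{grain}', {m['time_column']}) AS period, "
        f"{m['measure']} AS {name} "
        f"FROM {m['table']} GROUP BY 1 ORDER BY 1"
    )

print(compile_metric("weekly_active_users"))
```

The point of the pattern is that producers publish one vetted definition per metric, and every consumer compiles queries from it instead of re-deriving the SQL by hand.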
Accommodations
If you require assistance due to a disability when applying for open positions, please submit a request via this Accommodations Request Form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications, without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 2 days ago

Apply

5.0 years

6 - 10 Lacs

Hyderābād

On-site

At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Title: AWS Senior Data Engineer
Experience Required: Minimum 5+ years

Job Summary:
We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services.
- Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing.
- Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data.
- Develop and maintain data storage solutions using Medallion Architecture in S3, Redshift, and Oracle.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Monitor and optimize data workflows using Airflow and other orchestration tools.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement CI/CD practices for data pipeline deployment using Terraform and other tools.
- Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance.
- Communicate effectively with stakeholders to gather requirements and provide updates on project status.

Technical Skills Required:
- Proficiency in Python for data processing and automation.
- Strong experience with Apache Spark for large-scale data processing.
- Familiarity with AWS S3 for data storage and management.
- Experience with Kafka for real-time data streaming.
- Knowledge of Redshift for data warehousing solutions.
- Proficiency in Oracle databases for data management.
- Experience with AWS Glue for ETL processes.
- Familiarity with Apache Airflow for workflow orchestration.
- Experience with EMR for big data processing.
- Mandatory: Strong AWS data engineering skills.

Good Additional Skills:
- Familiarity with Terraform for infrastructure as code.
- Experience with messaging services such as SNS and SQS.
- Knowledge of monitoring and logging tools like CloudWatch, Datadog, and Splunk.
- Experience with AWS DataSync, DMS, Athena, and Lake Formation.

Communication Skills: Excellent verbal and written communication skills are mandatory for effective collaboration with team members and stakeholders.

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
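The Medallion Architecture named in the responsibilities organizes data into bronze (raw), silver (cleansed), and gold (curated) layers. A dependency-free sketch of the bronze-to-silver cleanse step follows; bucket names, dataset names, and fields are all invented, and in a real pipeline this logic would run as a Spark job on EMR or AWS Glue rather than in plain Python:

```python
# Toy sketch of a Medallion-style bronze -> silver cleanse step.
# Bucket names, dataset names, and fields are hypothetical.

def layer_path(layer: str, dataset: str, ds: str) -> str:
    """Build a partitioned S3 key for a given Medallion layer."""
    return f"s3://datalake-{layer}/{dataset}/ds={ds}/"

def cleanse(records):
    """Bronze -> silver: drop rows missing the primary key, dedupe on it,
    and normalise string casing."""
    seen, silver = set(), []
    for r in records:
        key = r.get("customer_id")
        if key is None or key in seen:
            continue  # reject incomplete rows and duplicates
        seen.add(key)
        silver.append({**r, "email": r.get("email", "").strip().lower()})
    return silver

bronze = [
    {"customer_id": 1, "email": " A@X.COM "},
    {"customer_id": 1, "email": "a@x.com"},    # duplicate key
    {"customer_id": None, "email": "b@x.com"}, # missing key
]
print(layer_path("silver", "customers", "2025-01-01"))
print(cleanse(bronze))
```

The same shape scales up directly: the bronze layer keeps everything as ingested, and only validated, deduplicated rows are promoted to silver.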

Posted 2 days ago

Apply

2.0 - 3.0 years

0 Lacs

Telangana

On-site

About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at www.chubb.com.

About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb, where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.
Role: ML Engineer (Associate / Senior)
Experience: 2-3 years (Associate), 4-5 years (Senior)
Mandatory Skills: Python / MLOps / Docker and Kubernetes / FastAPI or Flask / CI/CD / Jenkins / Spark / SQL / RDB / Cosmos / Kafka / ADLS / API / Databricks
Other Skills: Azure / LLMOps / ADF / ETL
Location: Bangalore
Notice Period: less than 60 days

Job Description:
We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges.

Responsibilities:
- Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions.
- Create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders.
- Integrate machine learning models seamlessly into existing production systems.
- Continuously monitor and evaluate model performance, and retrain models automatically or periodically.
- Streamline existing ML pipelines to increase throughput.
- Identify and address security vulnerabilities in existing applications proactively.
- Design, develop, and implement machine learning models, preferably for insurance-related applications.
- Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team.

Requirements:
- Experience deploying ML models to production.
- Well versed in the Azure ecosystem.
- Knowledge of NLP and Generative AI techniques; relevant experience is a plus.
- Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) is a plus.

Why Chubb?
Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
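The posting asks for real-time inferencing APIs built with FastAPI or Flask. As a hedged sketch, the serving layer usually wraps a model loaded once at startup behind a validated predict endpoint. The handler below is framework-agnostic pure Python so it stays self-contained; in practice it would back a FastAPI or Flask route, and the model, feature names, and scoring rule here are all hypothetical:

```python
# Framework-agnostic sketch of a real-time inference handler.
# In production this would back a FastAPI/Flask route and the model
# would be loaded once at startup (e.g. from a registry or ADLS).
# Feature names and the scoring rule are invented for illustration.

REQUIRED_FEATURES = ["age", "vehicle_value", "prior_claims"]

class DummyClaimModel:
    """Stand-in for a trained model object exposing predict()."""
    def predict(self, features: dict) -> float:
        # Toy linear score, NOT a real insurance model.
        return 0.01 * features["age"] + 0.1 * features["prior_claims"]

MODEL = DummyClaimModel()

def predict_handler(payload: dict) -> dict:
    """Validate the request payload and return a model score."""
    missing = [f for f in REQUIRED_FEATURES if f not in payload]
    if missing:
        return {"error": f"missing features: {missing}", "status": 400}
    score = MODEL.predict(payload)
    return {"risk_score": round(score, 4), "status": 200}

print(predict_handler({"age": 40, "vehicle_value": 9000, "prior_claims": 2}))
```

Validating inputs before they reach the model is the part that matters in production: a malformed request should fail fast with a 4xx rather than surface as a model exception.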
- Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
- A great place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026.
- Laser focus on excellence: at Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results.
- Start-up culture: embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
- Growth and success: as we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.

Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include:
- Savings and investment plans: we provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits and car lease that help employees optimally plan their finances.
- Upskilling and career growth opportunities: with a focus on continuous learning, we offer customized programs that support upskilling, like education reimbursement programs, certification programs and access to global learning programs.
- Health and welfare benefits: we care about our employees' well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns and comprehensive insurance benefits.

Application Process
Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable).
Step 4: Final interaction with Chubb leadership.

Join Us
With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.

Apply Now: Chubb External Careers

Posted 2 days ago

Apply

0 years

2 - 5 Lacs

Hyderābād

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Key Responsibilities
- Develop, deploy, and monitor machine learning models in production environments.
- Automate ML pipelines for model training, validation, and deployment.
- Optimize ML model performance, scalability, and cost efficiency.
- Implement CI/CD workflows for ML model versioning, testing, and deployment.
- Manage and optimize data processing workflows for structured and unstructured data.
- Design, build, and maintain scalable ML infrastructure on cloud platforms.
- Implement monitoring, logging, and alerting solutions for model performance tracking.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into business applications.
- Ensure compliance with best practices for security, data privacy, and governance.
- Stay updated with the latest trends in MLOps, AI, and cloud technologies.

Mandatory Technical Skills
- Programming languages: proficiency in Python (3.x) and SQL.
- ML frameworks & libraries: extensive knowledge of ML frameworks (TensorFlow, PyTorch, Scikit-learn), data structures, data modeling, and software architecture.
- Databases: experience with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, DynamoDB) databases.
- Mathematics & algorithms: strong understanding of mathematics, statistics, and algorithms for machine learning applications.
- ML modules & REST APIs: experience developing and integrating ML modules with RESTful APIs.
- Version control: hands-on experience with Git and best practices for version control.
- Model deployment & monitoring: experience deploying and monitoring ML models using:
  - MLflow (model tracking, versioning, and deployment)
  - WhyLabs (model monitoring and data drift detection)
  - Kubeflow (orchestrating ML workflows)
  - Airflow (managing ML pipelines)
  - Docker & Kubernetes (containerization and orchestration)
  - Prometheus & Grafana (logging and real-time monitoring)
- Data processing: ability to process and transform unstructured data into meaningful insights (e.g., auto-tagging images, text-to-speech conversions).

Preferred Cloud & Infrastructure Skills
- Cloud platforms: knowledge of AWS Lambda, AWS API Gateway, AWS Glue, Athena, S3 and Iceberg, and Azure AI Studio for model hosting, GPU/TPU usage, and scalable infrastructure.
- Infrastructure as code: hands-on with Terraform and CloudFormation for cloud automation.
- CI/CD pipelines: experience integrating ML models into continuous integration/continuous delivery workflows. We use Git-based CI/CD methods mostly.
- Feature stores: experience with Feast or Tecton for managing ML features.
- Big data processing: knowledge of tools such as Spark, Hadoop, Dask, and Apache Beam.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
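Data drift detection, named above alongside WhyLabs, is commonly measured with the Population Stability Index (PSI) between a baseline sample and a production sample. A minimal self-contained sketch, with the bins and toy samples invented for illustration:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline sample and a
    production sample. Rule of thumb: PSI < 0.1 means no drift,
    0.1-0.25 moderate drift, > 0.25 significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small epsilon so empty bins don't blow up the log
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
identical = list(baseline)
shifted = [x + 0.5 for x in baseline]  # simulated drift
print(psi(baseline, identical))  # 0.0: distributions match
print(psi(baseline, shifted))
```

A monitoring pipeline would compute this per feature on a schedule and alert (or trigger retraining) when the index crosses the chosen threshold.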

Posted 2 days ago

Apply

8.0 years

12 - 24 Lacs

Hyderābād

On-site

Senior Java / Spark Developer

Required Skills & Experience:
- 8 years of Java experience
- Strong Spring Boot experience
- Experience using Java with Spark; writing complex SQL and PL/SQL queries for data analysis, data lineage, and reconciliation, preferably in PostgreSQL/Oracle
- Experience creating data lineage documents, Source-to-Target Mapping (STM) documents, and low-level technical specification documents
- Experience designing and implementing ETL/ELT frameworks for complex warehouses/data marts
- Hands-on development mentality, with a willingness to troubleshoot and solve complex problems

Desired Skills/Experience:
- Life insurance industry experience a huge plus
- Keen ability to prioritize and handle multiple assignments
- Experience working in an on-site/off-site development model

Job Type: Full-time
Pay: ₹100,000.00 - ₹200,000.00 per month
Experience:
- Java: 8 years (Required)
- Spark: 8 years (Required)
- Spring Boot: 8 years (Required)
Work Location: In person

Posted 2 days ago

Apply

0 years

1 - 1 Lacs

Mohali

On-site

About the Role
We are looking for a passionate Data Science fresher who has completed at least 6 months of practical training, internship, or project experience in the data science field. This is an exciting opportunity to apply your analytical and problem-solving skills to real-world datasets while working closely with experienced data scientists and engineers.

Key Responsibilities
- Assist in data collection, cleaning, and preprocessing from various sources.
- Support the team in building, evaluating, and optimizing ML models.
- Perform exploratory data analysis (EDA) to derive insights and patterns.
- Work on data visualization dashboards and reports using tools like Power BI, Tableau, or Matplotlib/Seaborn.
- Collaborate with senior data scientists and domain experts on ongoing projects.
- Document findings, code, and models in a structured manner.
- Continuously learn and adopt new techniques, tools, and frameworks.

Required Skills & Qualifications
- Education: Bachelor's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
- Experience: minimum 6 months internship/training in data science, analytics, or machine learning.
- Technical skills:
  - Proficiency in Python (Pandas, NumPy, Scikit-learn, etc.).
  - Understanding of machine learning algorithms (supervised/unsupervised).
  - Knowledge of SQL and database concepts.
  - Familiarity with data visualization tools/libraries.
  - Basic understanding of statistics and probability.
- Soft skills:
  - Strong analytical thinking and problem-solving ability.
  - Good communication and teamwork skills.
  - Eagerness to learn and grow in a dynamic environment.

Good to Have (Optional)
- Exposure to cloud platforms (AWS, GCP, Azure).
- Experience with big data tools (Spark, Hadoop).
- Knowledge of deep learning frameworks (TensorFlow, PyTorch).

What We Offer
- Opportunity to work on real-world data science projects.
- Mentorship from experienced professionals in the field.
- A collaborative, innovative, and supportive work environment.
- Growth path to become a full-time Data Scientist with us.

Job Types: Full-time, Permanent, Fresher
Pay: ₹10,000.00 - ₹15,000.00 per month
Benefits: Health insurance
Schedule: Day shift, fixed shift, Monday to Friday
Application Question(s): Have you completed your 6-month training?
Education: Bachelor's (Preferred)
Language: English (Preferred)
Work Location: In person
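Exploratory data analysis, one of the responsibilities above, usually starts with per-column summaries before any modelling. A dependency-free sketch (a real workflow would lean on Pandas' `describe()`), with the toy column invented:

```python
import statistics

def summarize(column):
    """Basic EDA summary for one numeric column, tolerating missing values."""
    present = [x for x in column if x is not None]
    return {
        "count": len(present),
        "missing": len(column) - len(present),
        "mean": statistics.mean(present),
        "median": statistics.median(present),
        "stdev": round(statistics.stdev(present), 3),
        "min": min(present),
        "max": max(present),
    }

# Invented sample: customer ages with two missing entries.
ages = [23, 35, None, 41, 29, None, 35]
print(summarize(ages))
```

Counting missing values alongside the central tendencies is the point: the missing-rate tells you whether the column is usable before you reach for a model.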

Posted 2 days ago

Apply

5.0 - 6.0 years

8 - 15 Lacs

India

On-site

We are seeking a highly skilled Python Developer with expertise in Machine Learning and Data Analytics to join our team. The ideal candidate should have 5-6 years of experience in developing end-to-end ML-driven applications and handling data-driven projects independently. You will be responsible for designing, developing, and deploying Python-based applications that leverage data analytics, statistical modeling, and machine learning techniques.

Key Responsibilities:
- Design, develop, and deploy Python applications for data analytics and machine learning.
- Work independently on machine learning model development, evaluation, and optimization.
- Develop ETL pipelines and process large-scale datasets for analysis.
- Implement scalable and efficient algorithms for predictive analytics and automation.
- Optimize code for performance, scalability, and maintainability.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Integrate APIs and third-party tools to enhance functionality.
- Document processes, code, and best practices for maintainability.

Required Skills & Qualifications:
- 5-6 years of professional experience in Python application development.
- Strong expertise in Machine Learning, Data Analytics, and AI frameworks (TensorFlow, PyTorch, Scikit-learn, etc.).
- Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib.
- Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.).
- Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.).
- Strong experience in developing APIs and microservices using FastAPI, Flask, or Django.
- Good understanding of data structures, algorithms, and software development best practices.
- Strong problem-solving and debugging skills.
- Ability to work independently and handle multiple projects simultaneously.
- Good to have: working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications.

Job Type: Full-time
Pay: ₹800,000.00 - ₹1,500,000.00 per year
Schedule: Day shift
Experience: Python: 5 years (Required)
Work Location: In person
Expected Start Date: 01/08/2025
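The ETL-pipeline responsibility above can be illustrated by composing small extract, transform, and load steps into one pipeline. A dependency-free sketch with invented data; production code would push this logic into Spark or an orchestration framework:

```python
# Minimal extract -> transform -> load pipeline as composed functions.
# Source data and the schema are invented for illustration.

def extract():
    """Pretend source: raw order records, one malformed."""
    return [
        {"order_id": "1", "amount": "100.50"},
        {"order_id": "2", "amount": "oops"},
        {"order_id": "3", "amount": "49.99"},
    ]

def transform(rows):
    """Parse and type-cast fields, dropping rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append({"order_id": int(row["order_id"]),
                          "amount": float(row["amount"])})
        except ValueError:
            pass  # a real pipeline would route these to a dead-letter store
    return clean

def load(rows, sink):
    """Append validated rows to the sink and return the row count."""
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse)
```

Keeping each stage a pure function makes the pipeline testable in isolation, which is most of what "scalable and maintainable" means at this size.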

Posted 2 days ago

Apply

7.0 years

1 - 9 Lacs

Bengaluru

On-site

Organization:
At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward, to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Senior Software Engineer – Data Modernization (GenAI)
Location: Manyata Tech Park, Bangalore (Hybrid)

Business & Team:
CommSec is Australia's largest online retail stockbroker. It is one of the most highly visible and visited online assets in Australian financial services. CommSec's systems utilise a variety of technologies and support a broad range of investors. Engineers within CommSec are offered regular opportunities to work on some of the finest IT systems in Australia, as well as having the opportunity to develop careers across different functions and teams within the wider Bank.

Impact & Contribution:
Apply core concepts, technology and domain expertise to effectively develop software solutions to meet business needs. You will contribute to building a brighter future for all by ensuring that our team builds the best solutions possible using modern development practices that ensure both functional and non-functional needs are met. If you have a history of building a culture of empowerment and know what it takes to be a force multiplier within a large organization, then you're the kind of person we are looking for. You will report to the Lead Engineer within Business Banking Technology.

Roles & Responsibilities:
- Build scalable agentic AI solutions that integrate with existing systems and support business objectives.
- Implement MLOps pipelines.
- Design and conduct experiments to evaluate model performance and iteratively refine models based on findings.
- Hands-on experience in automated LLM outcome validation and metrication of AI outputs.
- Good knowledge of ethical AI practices and the tools to implement them.
- Hands-on experience with AWS cloud services such as SNS, SQS, and Lambda.
- Experience with big data platform technologies such as the Spark framework and vector databases.
- Collaborate with software engineers to deploy AI models in production environments, ensuring robustness and scalability.
- Participate in research initiatives to explore new AI models and methodologies that can be applied to current and future products.
- Develop and implement monitoring systems to track the performance of AI models in production.
- Hands-on DevSecOps experience, including continuous integration/continuous deployment and security practices.

Essential Skills:
The AI Engineer will be involved in the development and deployment of advanced AI and machine learning models. The ideal candidate is highly skilled in MLOps and software engineering, with a strong track record of developing AI models and deploying them in production environments.
- 7+ years' experience
- RAG, prompt engineering
- Vector DB, DynamoDB, Redshift
- Spark framework, Parquet, Iceberg
- Python
- MLOps
- Langfuse, LlamaIndex, MLflow, GLEU, BLEU
- AWS cloud services such as SNS, SQS, Lambda
- Traditional machine learning

Education Qualifications:
Bachelor's or Master's degree in engineering in Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We're keen to support you with the next step in your career.

We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 06/08/2025
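RAG, listed in the essential skills above, retrieves the most relevant context before prompting an LLM. A toy nearest-neighbour retrieval step over hand-made embedding vectors follows; a real system would use a learned embedding model and a vector database, and the documents and vectors here are invented:

```python
import math

# Toy RAG retrieval step: cosine similarity over tiny hand-made
# "embeddings". Documents and vectors are invented for illustration.

DOCS = {
    "brokerage fees":   [0.9, 0.1, 0.0],
    "margin lending":   [0.2, 0.8, 0.1],
    "two-factor login": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k document keys most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

context = retrieve([0.85, 0.2, 0.05])
print(context)  # the best match is injected into the LLM prompt as grounding
```

The retrieved passages are what ground the generation step; evaluation metrics like BLEU or GLEU (also named above) then score the generated answer against references.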

Posted 2 days ago

Apply

0.0 years

0 Lacs

Bengaluru

On-site

Data Engineer - 1 (Experience: 0-2 years)

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank and manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities to build things from scratch and deliver one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore and comprising ~10 sub-teams independently driving their charters.

As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in the digital transformation journey of Kotak; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and be futuristic in building systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; managing the central data warehouse for extremely high-concurrency use cases; building connectors for different sources; building a customer feature repository; building cost-optimization solutions like EMR optimizers; performing automations; and building observability capabilities for Kotak's data platform. The team will also be the center for data engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, cutting across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.

Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform.
If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you. Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools. BASIC QUALIFICATIONS for Data Engineer / SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills. PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. 
Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
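The "config-based and programmatic" data modelling the Kotak posting describes can be pictured as declaring transformations in configuration rather than code. A minimal pure-Python sketch of that idea (the config schema, column names, and operators are invented for illustration; a production version would drive Spark jobs rather than plain dicts):

```python
# Hypothetical config-driven transformation: select/rename/filter steps are
# declared as data, so onboarding a new dataset is a config change, not code.
CONFIG = {
    "select": ["account_id", "branch", "balance"],
    "rename": {"account_id": "acct_id"},
    "filters": [("balance", "gt", 0)],
}

# Operator registry referenced by the config (illustrative).
OPS = {"gt": lambda a, b: a > b, "eq": lambda a, b: a == b}

def apply_config(rows, config):
    """Apply the select/rename/filter steps declared in a config dict."""
    out = []
    for row in rows:
        if all(OPS[op](row[col], val) for col, op, val in config["filters"]):
            picked = {c: row[c] for c in config["select"]}
            out.append({config["rename"].get(k, k): v for k, v in picked.items()})
    return out

rows = [
    {"account_id": 1, "branch": "BLR", "balance": 150.0, "pan": "x"},
    {"account_id": 2, "branch": "MUM", "balance": -20.0, "pan": "y"},
]
print(apply_config(rows, CONFIG))
# keeps only the positive-balance row, with account_id renamed to acct_id
```

The appeal of this design is that adding a dataset becomes a config review rather than a new pipeline, which is what makes owning "thousands of datasets" tractable for a central team.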

Posted 2 days ago

Apply

8.0 years

5 - 10 Lacs

Bengaluru

On-site

We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. What you'll do: We are looking for a Senior Software Engineer – Java to join and strengthen the App2App Integration team within SAP Business Data Cloud. This role is designed to accelerate the integration of SAP’s application ecosystem with its unified data fabric, enabling low-latency, secure and scalable data exchange. You will take ownership of designing and building core integration frameworks that enable real-time, event-driven data flows between distributed SAP systems. As a senior contributor, you will work closely with architects to drive the evolution of SAP’s App2App integration capabilities, with hands-on involvement in Java, ETL and distributed data processing, Apache Kafka, DevOps, SAP BTP and Hyperscaler platforms. Responsibilities: Design and develop App2App integration components and services using Java, RESTful APIs and messaging frameworks such as Apache Kafka. Build and maintain scalable data processing and ETL pipelines that support real-time and batch data flows. Integrate data engineering workflows with tools such as Databricks, Spark or other cloud-based processing platforms (experience with Databricks is a strong advantage). Accelerate the App2App integration roadmap by identifying reusable patterns, driving platform automation and establishing best practices. 
Collaborate with cross-functional teams to enable secure, reliable and performant communication across SAP applications. Build and maintain distributed data processing pipelines, supporting large-scale data ingestion, transformation and routing. Work closely with DevOps to define and improve CI/CD pipelines, monitoring and deployment strategies using modern GitOps practices. Guide cloud-native secure deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP). Collaborate with SAP's broader Data Platform efforts, including Datasphere, SAP Analytics Cloud and the BDC runtime architecture. What you bring: Bachelor's or Master's degree in Computer Science, Software Engineering or a related field. 8+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns. Hands-on experience building ETL pipelines and working with large-scale data processing frameworks. Experience or experimentation with tools such as Databricks, Apache Spark or other cloud-native data platforms is highly advantageous. Familiarity with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud or HANA is highly desirable. Experience designing CI/CD pipelines and working with containerization (Docker), Kubernetes and DevOps best practices. Working knowledge of Hyperscaler environments such as AWS, Azure or GCP. Passionate about clean code, automated testing, performance tuning and continuous improvement. Strong communication skills and the ability to collaborate with global teams across time zones. Meet your Team: SAP is the market leader in enterprise application software, helping companies of all sizes and industries run at their best. As part of the Business Data Cloud (BDC) organization, the Foundation Services team is pivotal to SAP's Data & AI strategy, delivering next-generation data experiences that power intelligence across the enterprise. 
Located in Bangalore, India, our team drives cutting-edge engineering efforts in a collaborative, inclusive and high-impact environment, enabling innovation and integration across SAP’s data platforms #DevT3 Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. 
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com. For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, et al.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 426958 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.

Posted 2 days ago

Apply

5.0 - 7.0 years

4 - 10 Lacs

Bengaluru

On-site

5 - 7 Years 1 Opening Bengaluru Role description Job Title: Java Spark Developer Experience: 5 to 7 Years Location: Bangalore Job Summary: We are seeking a skilled Java Spark Developer to design, develop, and maintain big data applications leveraging Apache Spark and Java. The ideal candidate will have a strong background in Core Java, experience with DataFrames and Spark SQL, and a solid understanding of relational databases and orchestration frameworks. Primary Responsibilities: Design, develop, and maintain Java-based big data applications using Apache Spark. Work extensively with DataFrames to process and analyze large datasets. Integrate and manage relational databases such as MySQL, PostgreSQL, or Oracle. Utilize orchestration frameworks to automate and manage data workflows. Collaborate with data engineers to define and implement robust data processing pipelines. Write clean, maintainable, and efficient code following best practices. Conduct code reviews and provide constructive feedback to peers. Troubleshoot performance issues and resolve software defects promptly. Stay up to date with the latest trends and technologies in Java development and big data. Required Skills & Qualifications: Strong proficiency in Core Java. 5 to 7 years of hands-on experience with Apache Spark, including Spark DataFrames and Spark SQL. Experience with relational databases (e.g., Db2, PostgreSQL, MySQL). Familiarity with orchestration frameworks for managing data workflows. Solid understanding of big data processing and analytics. Excellent problem-solving skills and keen attention to detail. Strong communication and team collaboration skills. Preferred Skills (Nice to Have): Familiarity with distributed file systems (e.g., HDFS). Experience with CI/CD tools like Jenkins, GitLab CI. Understanding of data warehousing concepts. Knowledge of Agile/Scrum methodologies. Why Join Us? Opportunity to work on cutting-edge big data projects. 
Collaborative and growth-focused work culture. Competitive compensation and benefits package. Skills: Core Java, Apache Spark, Relational Databases. About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
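The Spark DataFrame/Spark-SQL work this role centers on ultimately means expressing transformations as SQL over structured data. As a runnable stand-in (Spark and Java omitted to keep the sketch self-contained), the same GROUP BY aggregation is shown here against Python's built-in sqlite3; in Spark the query would run via spark.sql(...) over a registered DataFrame view. Table and column names are invented:

```python
import sqlite3

# Stand-in for a Spark-SQL aggregation over a structured dataset.
# In Spark: df.createOrReplaceTempView("orders"); spark.sql(query).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("south", 10.0), ("south", 15.0), ("north", 7.5)])

query = """
    SELECT region, COUNT(*) AS n, SUM(amount) AS total
    FROM orders
    GROUP BY region
    ORDER BY region
"""
for region, n, total in conn.execute(query):
    print(region, n, total)
# north 1 7.5
# south 2 25.0
```

The same mental model carries over to the DataFrame API: `groupBy("region").agg(...)` compiles to the equivalent aggregation plan.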

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru

On-site

Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines. About the team Roku is the No. 1 TV streaming platform in the U.S., Canada, and Mexico with 70+ million active accounts. Roku pioneered streaming to the TV and continues to innovate and lead the industry. We believe Roku's continued success relies on its investment in our machine learning (ML) recommendation engine. Roku enables our users to access millions of pieces of content, including movies, episodes, news, sports, music, and channels from all around the world. About the role We're on a mission to build cutting-edge advertising technology that empowers businesses to run sustainable and highly profitable campaigns. The Ad Performance team owns server technologies, data, and cloud services aimed at improving the ad experience. We're looking for seasoned engineers with a background in machine learning to aid in this mission. Examples of problems include improving ad relevance, inferring demographics, yield optimisation, and many more. Employees in this role are expected to apply knowledge of experimental methodologies, statistics, optimisation, probability theory, and machine learning using both general-purpose software and statistical languages. 
What you'll be doing ML infrastructure: Help build a first-class machine learning platform from the ground up that manages the entire model lifecycle - feature engineering, model training, versioning, deployment, online serving/evaluation, and monitoring prediction quality. Data analysis and feature engineering: Apply your expertise to identify and generate features that can be leveraged by multiple use cases and models. Model training with batch and real-time prediction scenarios: Use machine learning and statistical modelling techniques such as Decision Trees, Logistic Regression, Neural Networks, Bayesian Analysis and others to develop and evaluate algorithms for improving product/system performance, quality, and accuracy. Production operations: Low-level systems debugging, performance measurement, and optimisation on large production clusters. Collaboration with cross-functional teams: Partner with product managers, data scientists, and other engineers to deliver impactful solutions. Staying ahead of the curve: Continuously learn and adapt to emerging technologies and industry trends. We're excited if you have A Bachelor's, Master's, or PhD in Computer Science, Statistics, or a related field. Experience in applied machine learning on real use cases (bonus points for ad tech-related use cases). Great coding skills and strong software development experience (we use Spark, Python, Java). Familiarity with real-time evaluation of models with low latency constraints. Familiarity with distributed ML frameworks such as Spark MLlib, TensorFlow, etc. Ability to work with large-scale computing frameworks, data analysis systems, and modelling environments. Examples include Spark, Hive, and NoSQL stores such as Aerospike and ScyllaDB. An ad tech background is a plus. #LI-PS2 Benefits Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. 
Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter. The Roku Culture Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 2 days ago

Apply

5.0 - 7.0 years

4 - 10 Lacs

Bengaluru

On-site

5 - 7 Years 1 Opening Bengaluru Role description Job Title: Data Engineer Experience Required: 5 to 7 Years Location: Bangalore Primary Responsibilities: Design, develop, and maintain scalable ETL pipelines for processing and transforming large datasets. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective data solutions. Optimize and tune data processing workflows to improve performance and efficiency. Implement data quality checks to ensure integrity and consistency across data sources. Manage and maintain relational databases and data warehouses. Leverage cloud-based data platforms (e.g., Snowflake, Databricks) for data storage and processing. Monitor and troubleshoot data pipelines to ensure reliability and minimize downtime. Create and maintain documentation for data engineering processes and best practices. Required Skills & Qualifications: 5 to 7 years of experience as a Data Engineer or in a similar role. Proficiency in Apache Spark for large-scale data processing. Strong programming skills in Python. Advanced knowledge of SQL for data querying and manipulation. Experience working with relational databases and building scalable ETL pipelines. Familiarity with cloud data platforms such as Snowflake or Databricks. Strong problem-solving skills and high attention to detail. Excellent communication and collaboration abilities. Desired Skills: Experience with big data technologies such as Kafka. Understanding of data modeling and data warehousing concepts. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). Experience with version control systems like Git. Knowledge of data governance and compliance requirements. Skills: Data Engineering, Apache Spark, Python, SQL. About UST UST is a global digital transformation solutions provider. 
For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
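The "data quality checks to ensure integrity and consistency" responsibility in this posting typically starts with null and duplicate-key checks run before a load. A minimal sketch, assuming list-of-dict records (the field names and report shape are illustrative, not any specific platform's API):

```python
# Hypothetical pre-load quality report: counts duplicate keys and rows with
# missing required fields so a pipeline can fail fast before writing bad data.

def quality_report(rows, key, required):
    """Return simple integrity metrics for a batch of records."""
    seen, dup_keys, null_violations = set(), 0, 0
    for row in rows:
        k = row.get(key)
        if k in seen:
            dup_keys += 1
        seen.add(k)
        if any(row.get(col) is None for col in required):
            null_violations += 1
    return {"rows": len(rows), "dup_keys": dup_keys,
            "null_violations": null_violations}

batch = [
    {"id": 1, "amount": 9.5},
    {"id": 1, "amount": 3.0},   # duplicate key
    {"id": 2, "amount": None},  # missing required field
]
report = quality_report(batch, key="id", required=["amount"])
print(report)
# {'rows': 3, 'dup_keys': 1, 'null_violations': 1}
```

In a Spark or Databricks pipeline the same counts would be computed as aggregations over the DataFrame, with the pipeline aborting (or quarantining rows) when a threshold is exceeded.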

Posted 2 days ago

Apply

8.0 - 13.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure. Using technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc. daily. Maintaining data quality and accuracy across production data systems. Working with Data Analysts to develop ETL processes for analysis and reporting. Working with Product Managers to design and build data products. Working with our DevOps team to scale and optimize our data infrastructure. Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects. Participating in the on-call rotation in your respective time zone (be available by phone or email in case something goes wrong). Desired Characteristics: Minimum 8 years of software engineering experience. An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired. 2+ years of experience/fluency in Python. Proficiency with relational databases and advanced SQL. Expertise in using services like Spark and Hive. Experience working with container-based solutions is a plus. Experience with a scheduler such as Apache Airflow, Apache Luigi, Chronos, etc. Experience using cloud services (AWS) at scale. Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things. Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments. Exposure to the whole software development lifecycle from inception to production and monitoring. Experience in the advertising attribution domain is a plus. Experience in agile software development processes. Excellent interpersonal and communication skills.
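The schedulers this posting names (Airflow, Luigi, Chronos) all reduce to dependency-ordered task execution over a DAG. A pure-Python sketch of that core idea using the standard library's graphlib (the task names are invented; a real Airflow DAG would declare these as operators wired with the same dependency edges):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: each task maps to the set of tasks it depends on.
DEPS = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load", "transform"},
}

# static_order() yields tasks in an order that respects every dependency,
# which is exactly the guarantee a scheduler provides before running tasks.
order = list(TopologicalSorter(DEPS).static_order())
print(order)
```

Airflow adds scheduling, retries, and distribution on top, but the dependency-resolution core is the same, and a cycle in DEPS would be rejected here just as a cyclic DAG is rejected by Airflow.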

Posted 2 days ago

Apply

15.0 years

6 - 8 Lacs

Bengaluru

On-site

Company Description Bosch Global Software Technologies Private Limited is a 100% owned subsidiary of Robert Bosch GmbH, one of the world's leading global suppliers of technology and services, offering end-to-end Engineering, IT and Business Solutions. With 28,200+ associates, it is the largest software development center of Bosch outside Germany, making it the technology powerhouse of Bosch in India, with a global footprint and presence in the US, Europe and the Asia Pacific region. Job Description Job Summary - Bosch Research is seeking a highly accomplished and technically authoritative Software Expert in AI/ML Architecture to define, evolve, and lead the technical foundations of enterprise-grade, AI-driven systems. This is a technical leadership role without people management responsibilities, intended for professionals with deep expertise in software architecture, AI/ML systems, and the end-to-end delivery of large-scale engineering applications. You will own the architecture and technical delivery of complex software solutions, ensuring they are robust, scalable, and capable of serving diverse business domains and datasets. The ideal candidate demonstrates mastery in cloud-native engineering, MLOps, Azure ML, and the integration of AI algorithms (Computer Vision, Text, Timeseries, ML, etc.), LLMs, Agentic AI, and other advanced AI capabilities into secure and high-performing software environments. Roles & Responsibilities: Technical Architecture and Solution Ownership Define, evolve, and drive software architecture for AI-centric platforms across industrial and enterprise use cases. Architect for scalability, security, availability, and multi-domain adaptability, accommodating diverse data modalities and system constraints. Embed non-functional requirements (NFRs) such as latency, throughput, fault tolerance, observability, security, and maintainability into all architectural designs. 
Incorporate LLM , Agentic AI , and foundation model design patterns where appropriate, ensuring performance and operational compliance in real-world deployments. Enterprise Delivery and Vision Lead the translation of research and experimentation into production-grade solutions with measurable impact on business KPIs (both top-line growth and bottom-line efficiency). Perform deep-dive gap analysis in existing software and data pipelines and develop long-term architectural solutions and migration strategies. Build architectures that thrive under enterprise constraints , such as regulatory compliance, resource limits, multi-tenancy, and lifecycle governance. AI/ML Engineering and MLOps Design and implement scalable MLOps workflows , integrating CI/CD pipelines, experiment tracking, automated validation, and model retraining loops. Operationalize AI pipelines using Azure Machine Learning (Azure ML) services and ensure seamless collaboration with data science and platform teams. Ensure architectures accommodate responsible AI , model explainability, and observability layers. Software Quality and Engineering Discipline Champion software engineering best practices with rigorous attention to: Code quality through static/dynamic analysis and automated quality metrics Code reviews , pair programming, and technical design documentation Unit, integration, and system testing , backed by frameworks like pytest, unit test, or Robot Framework Code quality tools such as SonarQube, CodeQL, or similar Drive the culture of traceability, testability, and reliability , embedding quality gates into the development lifecycle. Own the technical validation lifecycle , ensuring reproducibility and continuous monitoring post-deployment. Cloud-Native AI Infrastructure Architect AI services with cloud-native principles , including microservices, containers, and service mesh. Leverage Azure ML , Kubernetes , Terraform , and cloud-specific SDKs for full lifecycle management. 
Ensure compatibility with hybrid-cloud/on-premise environments and support constraints typical of engineering and industrial domains. Qualifications Educational qualification: Master's or Ph.D. in Computer Science, AI/ML, Software Engineering, or a related technical discipline. Experience: 15+ years in software development, including: Deep experience in AI/ML-based software systems. Strong architectural leadership in enterprise software design. Delivery experience in engineering-heavy and data-rich environments. Mandatory/Required Skills: Programming: Python (required), Java, JS, frontend/backend technologies, databases; C++ (bonus). AI/ML: TensorFlow, PyTorch, ONNX, scikit-learn, MLflow (or equivalents). LLM/GenAI: Knowledge of transformers, attention mechanisms, fine-tuning, prompt engineering. Agentic AI: Familiarity with planning frameworks, autonomous agents, and orchestration layers. Cloud Platforms: Azure (preferred), AWS or GCP; experience with Azure ML Studio and SDKs. Data & Pipelines: Airflow, Kafka, Spark, Delta Lake, Parquet, SQL/NoSQL. Architecture: Microservices, event-driven design, API gateways, gRPC/REST, secure multi-tenancy. DevOps/MLOps: GitOps, Jenkins, Azure DevOps, Terraform, containerization (Docker, Helm, K8s). What You Bring Proven ability to bridge research and engineering in the AI/ML space with strong architectural clarity. Ability to translate ambiguous requirements into scalable design patterns. Deep understanding of the enterprise SDLC, including review cycles, compliance, testing, and cross-functional alignment. A mindset focused on continuous improvement, metrics-driven development, and transparent technical decision-making. Additional Information Why Bosch Research? At Bosch Research, you will be empowered to lead the architectural blueprint of AI/ML software products that make a tangible difference in industrial innovation. 
You will have the autonomy to architect with vision, scale with quality, and deliver with rigor—while collaborating with a global community of experts in AI, engineering, and embedded systems.
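The automated validation and "quality gates" this role describes for MLOps pipelines can be as simple as a promotion check a candidate model must pass before deployment. A hypothetical sketch (metric names and thresholds are invented; a real pipeline would pull these values from an experiment tracker such as MLflow):

```python
# Illustrative model-promotion gate: a candidate must meet minimum thresholds
# and must not regress against the incumbent production model.
GATES = {"accuracy": 0.90, "latency_ms": 50.0}

def passes_gate(candidate, incumbent, gates):
    """Return True only if thresholds hold and accuracy does not regress."""
    if candidate["accuracy"] < gates["accuracy"]:
        return False  # below the absolute quality floor
    if candidate["latency_ms"] > gates["latency_ms"]:
        return False  # violates the serving-latency NFR
    return candidate["accuracy"] >= incumbent["accuracy"]

cand = {"accuracy": 0.93, "latency_ms": 42.0}
prod = {"accuracy": 0.91, "latency_ms": 45.0}
print(passes_gate(cand, prod, GATES))  # True
```

Wiring this check into CI/CD (as a required step between training and deployment) is what turns a validation convention into an enforced quality gate.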

Posted 2 days ago

Apply

4.0 years

8 - 10 Lacs

Bengaluru

On-site

As passionate about our people as we are about our mission. Why Join Q2? Q2 is a leading provider of digital banking and lending solutions to banks, credit unions, alternative finance companies, and fintechs in the U.S. and internationally. Our mission is simple: build strong and diverse communities through innovative financial technology—and we do that by empowering our people to help create success for our customers. What Makes Q2 Special? Being as passionate about our people as we are about our mission. We celebrate our employees in many ways, including our “Circle of Awesomeness” award ceremony and day of employee celebration among others! We invest in the growth and development of our team members through ongoing learning opportunities, mentorship programs, internal mobility, and meaningful leadership relationships. We also know that nothing builds trust and collaboration like having fun. We hold an annual Dodgeball for Charity event at our Q2 Stadium in Austin, inviting other local companies to play, and community organizations we support to raise money and awareness together. Company Overview: PrecisionLender’s pricing and profitability platform helps commercial bank relationship managers make smart, real-time pricing decisions and deliver superior customer service. Andi®, our virtual pricing analyst, uses artificial intelligence to glean and deliver insights from the thousands of deals priced daily in the platform. Using PrecisionLender, banks grow faster with stronger and more profitable relationships. Our product is used globally by 200+ banks and 10,000+ relationship managers to price more than $1 trillion in commercial loans. What You’ll Do Here: As a Software Engineer in Test, you will be responsible for building solutions to testing problems. The primary focus of your work will be automation. You will be automating tests at all levels (unit, integration, end-to-end) for the PrecisionLender web app. 
These tests will run in our continuous integration environment, so they must be efficient, robust, and scalable. Your tests will enable code to be pushed to production with a high level of quality and the end-user in mind. You will work closely with Software Development teams to understand their needs and be able to create test coverage and scenarios that ensure we are building quality into our growing product base. If you like to solve problems, streamline operations, and see the result of the work you put in, then we have a place for you on our team! You will be expected to take individual responsibility for delivering value, work with more senior members on larger efforts, and take part in continuously improving our product and company. RESPONSIBILITIES: Creates and maintains automated test cases, executes test suites, reviews and diagnoses reported bugs, and ensures overall system quality prior to a customer release. Designs, develops, maintains, and troubleshoots automated suites of tests through continuous integration for value-added feedback. Works with the engineering teams to derive testing requirements throughout the development cycle. Reproduces, debugs, and isolates problems and verifies fixes. Works closely with software developers to create software artefacts including test plans, test cases, test procedures and test reports. Works across functional areas with internal partner engineering teams in a disciplined agile environment. Estimates own testing tasks and works productively with minimum supervision while showing an excellent team attitude. Participates in the performance testing and analysis framework for a web services architecture. EXPERIENCE AND KNOWLEDGE: 4-year college degree in Software Engineering, Computer Science or related technical disciplines. 5+ years of experience, preferably in a Software Development Engineer in Test or Software Quality Engineering role. 
Solid experience in testing web-based applications, with hands-on expertise in UI automation scripting using Selenium (C#/Python/Java) and Selenium Grid. Strong expertise in testing web services (SOAP and RESTful APIs), with a focus on API automation. Proficiency in debugging complex web application issues through code reviews and log analysis. Good knowledge of OOP principles and data structures. Competence in working with one or two databases such as Postgres, MySQL, Oracle, or NoSQL systems. Experience with BDD and the Gherkin language is good to have. Experience with Azure DevOps pipelines, Jenkins, or other continuous integration systems. Experience with tools & applications such as JIRA, Confluence, BrowserStack/qTest, GitHub/Bitbucket. Must be a detail-oriented, analytical, and creative thinker. This position requires fluent written and oral communication in English. Health & Wellness Hybrid Work Opportunities Flexible Time Off Career Development & Mentoring Programs Health & Wellness Benefits, including competitive health insurance offerings and generous paid parental leave for eligible new parents Community Volunteering & Company Philanthropy Programs Employee Peer Recognition Programs – “You Earned it” Click here to find out more about the benefits we offer. Our Culture & Commitment: We’re proud to foster a supportive, inclusive environment where career growth, collaboration, and wellness are prioritized. And our benefits go beyond healthcare—offering resources for physical, mental, and professional well-being. Q2 employees are encouraged to give back through volunteer work and nonprofit support through our Spark Program ( see more ). We believe in making an impact—in the industry and in the community. We are an Equal Opportunity Employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, genetic information, or veteran status.
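The role above calls for automated tests that are "efficient, robust, and scalable" in CI. One building block behind that kind of robustness is an explicit-wait helper instead of fixed sleeps. The sketch below is a stdlib-only illustration of the idea, not PrecisionLender's actual framework; the names are hypothetical, and a fake page object stands in for a real Selenium driver (where `WebDriverWait` plays this role).

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Mirrors the idea behind Selenium's WebDriverWait: avoid brittle fixed
    sleeps by re-checking the condition at a short interval.
    """
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        last = condition()
        if last:
            return last
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s (last value: {last!r})")

# Hypothetical stand-in for a page under test: the "element" only
# appears after a few polls, as it would while a page renders.
class FakePage:
    def __init__(self, appears_after_calls):
        self.calls = 0
        self.appears_after_calls = appears_after_calls

    def find_element(self):
        self.calls += 1
        return "element" if self.calls >= self.appears_after_calls else None
```

In a real suite the condition would wrap a driver lookup; the point is that the wait succeeds as soon as the condition holds, keeping CI runs fast while tolerating slow renders.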

Posted 2 days ago

Apply

10.0 years

2 - 11 Lacs

Bengaluru

On-site

Join our Team About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role — we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale. What you will do: Design and build robust ML pipelines and services for training, validation, and model deployment. Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate deviations from the target architecture (if any). Cloud Integration: Ensure compatibility with AWS and Azure cloud services for enhanced performance and scalability. Build reusable infrastructure components using best practices in DevOps and MLOps. Security and Compliance: Adhere to security standards and regulatory compliance, particularly in handling confidential and sensitive data. Network Security: Design an optimal network plan for the given cloud infrastructure under Ericsson (E//) network security guidelines. Monitor model performance in production and implement drift detection and retraining pipelines. Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration). Documentation and Knowledge Sharing: Create detailed documentation and guidelines for the use and modification of the developed components. The skills you bring: Strong programming skills in Python Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost). 
Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes. Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI). Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL). Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm). Experience with monitoring/logging (Prometheus, Grafana, ELK). Good-to-Have Skills Experience with feature stores (Feast, Tecton) and experiment tracking platforms. Knowledge of edge/embedded ML, model quantization, and optimization. Familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases. Experience leading cross-functional initiatives or mentoring junior engineers. Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more. Primary country and city: India (IN) || Bangalore Req ID: 770160
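Among the responsibilities above is implementing "drift detection and retraining pipelines." Purely as an illustration (not Ericsson's stack), here is a stdlib-only sketch of one common drift signal, the Population Stability Index; in production the same comparison would run inside the MLOps tooling listed above, against logged feature distributions.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.

    PSI above roughly 0.2 is a common rule-of-thumb trigger for a
    drift alert and, eventually, a retraining run.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty bins to avoid log(0) / division by zero.
        return [max(c, 1e-6) / len(sample) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

A retraining pipeline would compute this per feature on a schedule and enqueue a training job when the index crosses the chosen threshold.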

Posted 2 days ago

Apply

5.0 years

0 Lacs

Bengaluru

On-site

Job Description: Senior/Azure Data Engineer Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai At least 5 years of relevant hands-on development experience in an Azure Data Engineering role Proficient in Azure technologies such as ADB, ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, Unity Catalog Hands-on experience in Python, PySpark or Spark SQL Hands-on experience in Azure Analytics and DevOps Taking part in Proof of Concepts (POCs) and pilot solution preparation Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows Experience in business process mapping of data and analytics solutions At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here .
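The role above lists data profiling as an input to technical design. As a hedged illustration only, the sketch below computes a minimal per-column profile (null count, distinct count, observed types) in plain Python; in an actual Azure Databricks pipeline the equivalent summary would come from PySpark aggregations rather than this loop.

```python
def profile(rows):
    """Basic column profile: null count, distinct count, observed types.

    `rows` is a list of dicts, one per record (a hypothetical shape used
    here for illustration). The same summary would be produced at scale
    with PySpark (e.g. null counts plus countDistinct per column).
    """
    columns = {}
    for row in rows:
        for col, val in row.items():
            stats = columns.setdefault(col, {"nulls": 0, "values": set(), "types": set()})
            if val is None:
                stats["nulls"] += 1
            else:
                stats["values"].add(val)
                stats["types"].add(type(val).__name__)
    return {
        col: {
            "nulls": s["nulls"],
            "distinct": len(s["values"]),
            "types": sorted(s["types"]),
        }
        for col, s in columns.items()
    }
```

Profiles like this feed the cataloguing and mapping work the posting describes: they reveal nullable columns, mixed types, and low-cardinality candidates for dimension tables before any flow is designed.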

Posted 2 days ago

Apply

4.0 years

6 - 10 Lacs

Bengaluru

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Azure Data Engineer + Power BI Senior – Consulting As part of our GDS Consulting team, you will be part of the NCLC team delivering specifically to the Microsoft account. You will be working on the latest Microsoft BI technologies and will collaborate with other teams within Consulting services. The opportunity We’re looking for resources with expertise in Microsoft BI, Power BI, Azure Data Factory, and Databricks to join our Data Insights team. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of our service offering. Your key responsibilities Responsible for managing multiple client engagements. Understand and analyse business requirements by working with various stakeholders and create the appropriate information architecture, taxonomy and solution approach Work independently to gather requirements and handle cleansing, extraction, and loading of data Translate business and analyst requirements into technical code Create interactive and insightful dashboards and reports using Power BI, connecting to various data sources and implementing DAX calculations. Design and build complete ETL/Azure Data Factory processes moving and transforming data for ODS, Staging, and Data Warehousing Design and develop solutions in Databricks, Scala, Spark, and SQL to process and analyze large datasets, perform data transformations, and build data models. Design SQL schemas, database schemas, stored procedures, functions, and T-SQL queries. 
Skills and attributes for success Collaborating with other members of the engagement team to plan the engagement and develop work program timelines, risk assessments and other documents/templates. Able to manage senior stakeholders. Experience in leading teams to execute high-quality deliverables within stipulated timelines. Skills in Power BI, Azure Data Factory, Databricks, Azure Synapse, Data Modelling, DAX, Power Query, Microsoft Fabric Strong proficiency in Power BI, including data modelling, DAX, and creating interactive visualizations. Solid experience with Azure Databricks, including working with Spark, PySpark (or Scala), and optimizing big data processing. Good understanding of various Azure services relevant to data engineering, such as Azure Blob Storage, ADLS Gen2, Azure SQL Database/Synapse Analytics Strong SQL skills and experience with one of the following: Oracle, SQL Server, Azure SQL. Good to have experience in SSAS or Azure SSAS and Agile Project Management. Basic knowledge of Azure Machine Learning services. Excellent written and communication skills and the ability to deliver technical demonstrations Quick learner with a “can do” attitude Demonstrating and applying strong project management skills, inspiring teamwork and responsibility with engagement team members To qualify for the role, you must have A bachelor's or master's degree A minimum of 4-7 years of experience, preferably with a background in a professional services firm. Excellent communication skills with consulting experience preferred Ideally, you’ll also have Analytical ability to manage multiple projects and prioritize tasks into manageable work products. Can operate independently or with minimum supervision What working at EY offers At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. 
Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
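The responsibilities above include building ETL processes that move data through ODS and staging layers. As a simplified sketch only, with hypothetical column names, here is the validate-transform-route pattern such a staging step applies; an ADF/Databricks implementation would express the same logic as mapping data flows or PySpark transforms rather than a Python loop.

```python
from datetime import date

def to_staging(raw_rows):
    """Cleanse raw extract rows into a staging shape.

    Illustrates the validate-transform-route pattern: well-formed rows
    are cast and normalized; malformed rows are routed to an error sink
    instead of failing the whole load. Column names are hypothetical.
    """
    staged, rejected = [], []
    for row in raw_rows:
        try:
            staged.append({
                "customer_id": int(row["customer_id"]),
                "name": row["name"].strip().title(),
                "signup_date": date.fromisoformat(row["signup_date"]),
            })
        except (KeyError, ValueError, AttributeError):
            rejected.append(row)  # route bad records to an error sink
    return staged, rejected
```

Routing rejects instead of raising keeps the warehouse load deterministic and gives analysts a concrete queue of records to investigate.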

Posted 2 days ago

Apply

8.0 years

30 - 35 Lacs

Bengaluru

Remote

Location: Preferred Hyderabad or Bangalore, but can be remote for the right candidate. Exp: 8-10+ years Skillset: Design and implement robust, production-grade pipelines using Python, Spark SQL, and Airflow to process high-volume file-based datasets (CSV, Parquet, JSON). Own the full lifecycle of core pipelines — from file ingestion to validated, queryable datasets — ensuring high reliability and performance. Build resilient, idempotent transformation logic with data quality checks, validation layers, and observability. Refactor and scale existing pipelines to meet growing data and business needs. Tune Spark jobs and optimize distributed processing performance. Implement schema enforcement and versioning aligned with internal data standards. Collaborate deeply with Data Analysts, Data Scientists, Product Managers, Engineering, Platform, SMEs, and AMs to ensure pipelines meet evolving business needs. Monitor pipeline health, participate in on-call rotations, and proactively debug and resolve production data flow issues. Contribute to the evolution of our data platform — driving toward mature patterns in observability, testing, and automation. Build and enhance streaming pipelines (Kafka, SQS, or similar) where needed to support near-real-time data needs. Help develop and champion internal best practices around pipeline development and data modeling. Experience: 8-10 years of experience as a Data Engineer (or equivalent), building production-grade pipelines. Strong expertise in Python, Spark SQL, and Airflow. Experience processing large-scale file-based datasets (CSV, Parquet, JSON, etc.) in production environments. Experience mapping and standardizing raw external data into canonical models. Familiarity with AWS (or any cloud), including file storage and distributed compute concepts. Ability to work across teams, manage priorities, and own complex data workflows with minimal supervision. 
Strong written and verbal communication skills — able to explain technical concepts to non-engineering partners. Comfortable designing pipelines from scratch and improving existing pipelines. Experience working with large-scale or messy datasets (healthcare, financial, logs, etc.). Experience building, or willingness to learn, streaming pipelines using tools such as Kafka or SQS. Bonus: Familiarity with healthcare data (837, 835, EHR, UB04, claims normalization). Please share your updated resume with the below details: Highest Education: Total and Relevant Exp: CCTC: ECTC: Any offer in hand or in pipeline: Notice period: Current location: Job Type: Full-time Pay: ₹3,000,000.00 - ₹3,500,000.00 per year Benefits: Health insurance Schedule: Day shift Supplemental Pay: Performance bonus
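The posting stresses "resilient, idempotent transformation logic with data quality checks." A toy sketch of what idempotence means in practice, using hypothetical records: re-running the same batch produces the same result, which is what makes an Airflow retry safe. In Spark SQL the same pattern corresponds to MERGE INTO on a key column.

```python
def upsert(target, batch, key="id"):
    """Idempotent merge of `batch` into `target` by `key`.

    Running the same batch twice leaves the result unchanged, so a
    failed task can be retried without duplicating rows.
    """
    merged = {row[key]: row for row in target}
    for row in batch:
        if row[key] is None:
            # Data quality check: reject records with a null key
            # before they ever reach the target dataset.
            raise ValueError("data quality check failed: null primary key")
        merged[row[key]] = row
    return sorted(merged.values(), key=lambda r: r[key])
```

The contrast is with append-only writes, where a retry doubles the rows; keyed merges plus validation layers are what let pipelines be both resilient and trusted.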

Posted 2 days ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Bengaluru

On-site

As passionate about our people as we are about our mission. Why Join Q2? Q2 is a leading provider of digital banking and lending solutions to banks, credit unions, alternative finance companies, and fintechs in the U.S. and internationally. Our mission is simple: build strong and diverse communities through innovative financial technology—and we do that by empowering our people to help create success for our customers. What Makes Q2 Special? Being as passionate about our people as we are about our mission. We celebrate our employees in many ways, including our “Circle of Awesomeness” award ceremony and day of employee celebration among others! We invest in the growth and development of our team members through ongoing learning opportunities, mentorship programs, internal mobility, and meaningful leadership relationships. We also know that nothing builds trust and collaboration like having fun. We hold an annual Dodgeball for Charity event at our Q2 Stadium in Austin, inviting other local companies to play, and community organizations we support to raise money and awareness together. Key Responsibilities: Code Review: Perform detailed code reviews to ensure best practices, coding standards, and security protocols are followed. Provide constructive feedback to developers and recommend improvements to maintain code quality. Technical Investigations: Conduct in-depth technical investigations into complex issues, identify root causes, and develop solutions to prevent recurrence. Collaborate with cross-functional teams to implement fixes and improvements. Priority 1 (P1) Incident Management: Lead the response to P1 incidents, coordinating across teams to ensure timely and effective resolution. Communicate effectively with stakeholders during incidents and provide post-incident analysis and documentation. Root Cause Analysis (RCA): Conduct RCAs for major incidents, identifying corrective actions to mitigate future risks. 
Provide insights on incident trends and recommend proactive measures. Technical Documentation: Create and maintain detailed documentation for code reviews, incident investigations, and RCAs. Ensure knowledge is shared effectively across teams for continuous improvement. Continuous Improvement: Identify areas of improvement in processes and tools, propose enhancements, and lead initiatives to improve code quality, incident response, and system reliability. Qualifications: Education: Bachelor’s degree in Computer Science, Software Engineering, or related field. Experience: 5-7 years in a technical role involving code review, incident investigation, and incident management. Technical Skills: Proficiency in Python, SQL, and batch scripting. Strong understanding of software architecture, debugging, and troubleshooting. Experience with incident management tools like Salesforce and code review tools such as Git and Bitbucket. Problem-Solving Skills: Ability to analyze complex issues quickly and determine effective solutions. Strong critical thinking and troubleshooting skills. Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders. Preferred Qualifications: Certifications: Relevant certifications such as ITIL, AWS Certified Developer, or Certified ScrumMaster. Additional Experience: Experience with DevOps practices, CI/CD pipelines, and automation tools. Why Join Us? Opportunity to work in a dynamic, fast-paced environment where your contributions directly impact our products and services. Collaborative team culture with a focus on continuous learning and professional growth. Competitive salary and benefits, with opportunities for advancement. This position requires fluent written and oral communication in English. 
Health & Wellness Hybrid Work Opportunities Flexible Time Off Career Development & Mentoring Programs Health & Wellness Benefits, including competitive health insurance offerings and generous paid parental leave for eligible new parents Community Volunteering & Company Philanthropy Programs Employee Peer Recognition Programs – “You Earned it” Click here to find out more about the benefits we offer. Our Culture & Commitment: We’re proud to foster a supportive, inclusive environment where career growth, collaboration, and wellness are prioritized. And our benefits go beyond healthcare—offering resources for physical, mental, and professional well-being. Q2 employees are encouraged to give back through volunteer work and nonprofit support through our Spark Program ( see more ). We believe in making an impact—in the industry and in the community. We are an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, genetic information, or veteran status.
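The technical-investigation duties above typically begin with log analysis. Purely as an illustration (the log format here is made up, not Q2's), a first-pass P1 triage often counts ERROR lines per component to find where to start the root cause analysis:

```python
import re
from collections import Counter

# Hypothetical log shape: "<date> <time> <LEVEL> <component>: <message>"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<component>[\w.]+): (?P<message>.*)$")

def triage(lines):
    """Count ERROR lines per component, most affected first.

    A crude but fast way to isolate the dominant failure in raw logs
    before digging into a full root cause analysis.
    """
    errors = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            errors[m.group("component")] += 1
    return errors.most_common()
```

The ranked output points the investigation at the noisiest component first; real incident tooling adds time windows and correlation, but the counting step is the same idea.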

Posted 2 days ago

Apply

12.0 years

9 - 9 Lacs

Bengaluru

On-site

Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. athenahealth is a progressive, innovation-driven software product company. We partner with healthcare organizations across the care continuum to drive clinical and financial results. Our expert teams build modern technology on an open, connected ecosystem, yielding insights that make a difference for our customers and their patients. We maintain a unique values-driven employee culture and offer a flexible work-life balance. As evidence of our rapid growth and industry leadership, we were acquired by the world’s leading private equity firm “Bain Capital” in 2022 for $17bn, and we have many new strategic product initiatives. Position Summary: We are looking for a Senior Engineering Manager to lead the Identity and Access Management teams within our Data & Ecosystem platform. The Identity & Access Management teams provide foundational security workflows and infrastructure to secure, accelerate, and connect all athenaOne ecosystem participants. As an Engineering Manager, you will partner with Product to drive the success and evolution of our Identity services. You are committed to developing your engineers and fostering a culture of innovation, accountability, results, and high-performing teams. As a seasoned technical leader, you have a passion for architecting and building scalable solutions that empower development teams to thrive. You're a problem solver who excels at navigating complex technical challenges and driving innovative solutions that deliver impact. As a people leader, you are passionate about building inclusive and outcome-focused engineering teams, and committed to supporting and developing your engineers to deliver successful team outcomes. You bring curiosity, empathy, and an open mind to everything you do. 
If that sounds like you, and you’re interested in driving a forward-leaning approach to healthcare, we’d love you to join the team. Job Responsibilities Lead and manage a high-performing engineering team, focusing on talent development, mentoring, and coaching. Oversee performance management, including goal setting, feedback, and evaluation. Participate in recruitment efforts, ensuring the team has the necessary skills and expertise. Work with product managers, stakeholders, and other teams to align on project goals and deliverables. Ensure effective communication and manage stakeholder expectations. Manage project timelines, deliverables, and resource allocation to ensure successful project execution. Facilitate technical design discussions and ensure the production of accurate, unambiguous technical design specifications. Ensure high-quality software delivery by adhering to athena's policies and the latest and emerging trends and technologies. Ensure the platform meets high-quality standards, including performance, scalability, and reliability. Champion a culture of quality, innovation, and continuous improvement. Stay up to date with the latest industry trends, technologies, and best practices in software engineering and modernization. Identify opportunities for further optimization, automation, and innovation within the engineering teams and the overall technology landscape. Encourage a culture of continuous learning and knowledge sharing among the teams, fostering the adoption of new tools, frameworks, and methodologies. Collaborate with the broader organization to align the engineering roadmap with the company’s strategic objectives and long-term vision Qualifications 12+ years of experience in software development, with at least 3+ years of experience leading engineering teams Experience in leading high-performing engineering teams, with a focus on mentoring, coaching, and talent development. 
Ability to develop and execute technical roadmaps, aligning with business objectives and customer needs Strong communication skills, with experience working across geographies and functions, including engineering, product, and business stakeholders. Ability to identify and adopt new technologies and methodologies, driving innovation and technical excellence. Familiarity with languages and frameworks such as TypeScript, React, and Node.js, and with best practices in software development. Proven track record of technical leadership, team management, and strategic planning. Experience with AWS services such as ECS, EKS, CloudWatch, Lambda, and CloudFormation, with a strong understanding of cloud security and compliance. Familiarity with front-end and back-end technologies, including Node.js, React, and TypeScript. Experience designing and implementing scalable, secure, and highly available systems. Strong understanding of Agile methodologies, including unit and integration testing, design and code reviews, and documentation. Experience with tools like Jenkins and source code repositories such as Git and Bitbucket. Knowledge of identity and access management (IAM) systems, including Okta, Auth0, or similar technologies is a plus Familiarity with platform engineering principles, including building and maintaining scalable, secure, and highly available platforms. About athenahealth Our vision: In an industry that becomes more complex by the day, we stand for simplicity. We offer IT solutions and expert services that eliminate the daily hurdles preventing healthcare providers from focusing entirely on their patients — powered by our vision to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. Our company culture: Our talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our vision. 
We are a diverse group of dreamers and do-ers with unique knowledge, expertise, backgrounds, and perspectives. We unite as mission-driven problem-solvers with a deep desire to achieve our vision and make our time here count. Our award-winning culture is built around shared values of inclusiveness, accountability, and support. Our DEI commitment: Our vision of accessible, high-quality, and sustainable healthcare for all requires addressing the inequities that stand in the way. That's one reason we prioritize diversity, equity, and inclusion in every aspect of our business, from attracting and sustaining a diverse workforce to maintaining an inclusive environment for athenistas, our partners, customers and the communities where we work and serve. What we can do for you: Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. We provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. Learn more about our culture and benefits here: athenahealth.com/careers https://www.athenahealth.com/careers/equal-opportunity

Posted 2 days ago

Apply

5.0 years

6 - 10 Lacs

Bengaluru

On-site

Job Description About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Target in India operates as a fully integrated part of Target’s global team and has more than 4,000 team members supporting the company’s global strategy and operations. Tech Overview: Every time a guest enters a Target store or browses Target.com, they experience the impact of Target’s investments in technology and innovation. We’re the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 4,000 engineers, data scientists, architects, coaches and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests—and we do so with a focus on diversity and inclusion, experimentation and continuous learning. Pyramid Overview: Target.com and Mobile translates the in-store experience our guests love to the digital environment. Our Mobile Engineers develop native apps like Cartwheel and Target’s flagship app, which are high-impact and high-visibility assets that are game-changers for literally millions of guests. Here, you’ll get to explore emerging retail and mobile technologies, playing a key role in revolutionary product launches with tech giants like Apple and Google. You’ll be a visionary for the future of Target’s app ecosystem. You’ll have the advantage of Target’s unmatched brand recognition and special marketplace foothold—making us the partner of choice for innovative technologies like indoor mapping, iBeacons and Apple Pay. You’ll help Target evolve by using the latest open source tools and technologies and staying true to strong agile practices. 
You’ll lend your passion for engineering technologies that fix problems and meet needs guests didn’t even know they had. You’ll work on autonomous teams and incorporate the newest technical practices. You’ll have the chance to perform by writing rock-solid code that stands up to our massive scale. Plus, and perhaps best of all, you’ll have the right balance of self-rule and accountability for how technical products perform. Team Overview: We are dedicated to ensuring a seamless and efficient checkout experience for Guests shopping on our digital channels, including web and mobile apps. Our team plays a crucial role in the overall shopping journey, focusing on the final and most critical steps of the purchase process. We are responsible for managing the entire checkout lifecycle, from the moment a Guest adds an item to their cart to the final purchase confirmation. Our goal is to provide a smooth, secure, and user-friendly checkout process that enhances customer satisfaction and drives conversions. Our team is cross-geo located, with members driving different features and collaborating from both India and the US. This diverse setup allows us to leverage a wide range of expertise and perspectives, fostering innovative solutions and effective problem-solving. As part of the Digital Checkout team, you will have the opportunity to work with cutting-edge technologies and innovative solutions to continuously improve the checkout experience. Our collaborative and dynamic environment encourages creative problem-solving and the sharing of ideas to meet the evolving needs of our Guests. Position Overview: 5+ years of experience in software design & development with 3+ years of experience in building scalable backend applications using Java Demonstrates broad and deep expertise in Java/Kotlin and frameworks. Designs, develops, and approves end-to-end functionality of a product line, platform, or infrastructure. 
Communicates and coordinates with project team, partners, and stakeholders
Demonstrates expertise in the analysis and optimization of systems capacity, performance, and operational health
Maintains deep technical knowledge within areas of expertise
Stays current with new and evolving technologies through formal training and self-directed education
Experience integrating with third-party and open-source frameworks

About You:
4-year degree or equivalent experience
Experience: 4 to 7 years
Programming experience with Java (Spring Boot) and Kotlin (Micronaut)
Strong problem-solving skills with a good understanding of data structures and algorithms
Must have exposure to non-relational databases like MongoDB
Must have exposure to distributed systems and microservice architecture
Good to have: exposure to data pipelines, MLOps, Spark, and Python
Demonstrates a solid understanding of the impact of their own work on the team and/or guests
Writes and organizes code in multiple languages, including distributed programming, and understands different frameworks and paradigms
Delivers high-performance, scalable, repeatable, and secure deliverables with broad impact (high throughput and low latency)
Influences and applies data standards, policies, and procedures
Maintains technical knowledge within areas of expertise
Stays current with new and evolving technologies through formal training and self-directed education

Know More About Us Here:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
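To make the posting's "checkout lifecycle" concrete, here is a purely illustrative plain-Java sketch (not taken from the posting; all class and method names are hypothetical) of the add-item-to-cart-through-total step, using integer money math to avoid floating-point rounding:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical cart model illustrating the "add item to cart" step of a
// checkout lifecycle. Prices are kept in paise (integer minor units) so
// totals never suffer floating-point rounding errors.
public class Cart {
    record LineItem(String sku, long unitPricePaise, int quantity) {
        long subtotal() { return unitPricePaise * quantity; }
    }

    private final List<LineItem> items = new ArrayList<>();

    public void addItem(String sku, long unitPricePaise, int quantity) {
        items.add(new LineItem(sku, unitPricePaise, quantity));
    }

    public long totalPaise() {
        return items.stream().mapToLong(LineItem::subtotal).sum();
    }

    public static void main(String[] args) {
        Cart cart = new Cart();
        cart.addItem("TSHIRT-1", 49_900, 2); // 2 x Rs. 499.00
        cart.addItem("MUG-7", 29_900, 1);    // 1 x Rs. 299.00
        System.out.println(cart.totalPaise()); // prints 129700
    }
}
```

In a real service this aggregate would sit behind a REST endpoint and persist to a document store such as MongoDB; the sketch only shows the domain-model shape.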

Posted 2 days ago


0 years

3 - 6 Lacs

Bengaluru

On-site

Are you energized by being the first spark in a life-changing journey? Do you excel at spotting potential, igniting ambition, and expertly connecting dreams with the perfect guide? Is your drive fueled by transforming curious inquiries into confident first steps toward global education? If this feels like your calling, seize this pivotal opportunity as a Study Abroad Advisor (Associate) at Nbyula!

We seek intuitive connectors who thrive at the starting line. Your mission? To be the compelling first voice for aspiring global students: qualifying their intent, assessing their potential, and masterfully pairing them with the ideal Study Abroad Coach. If you're driven by the art of initial engagement, possess polished persuasion skills, and take pride in architecting powerful first connections, Nbyula is your stage.

Mission:
Become the pivotal first connection between dreams of global education and their realization. As the gateway to Nbyula's transformative journey, you'll ignite curiosity, uncover potential, and expertly match aspiring students with their ideal Study Abroad Coach. Your skill in identifying sparks of ambition fuels our mission to shape futures.

Core Responsibilities:
☑ Lead Ignition & Qualification: Be the welcoming voice and first digital point of contact for prospective students. Rapidly assess lead intent, academic potential, and readiness through insightful conversations. Identify high-potential prospects using strategic questioning and active listening, ensuring only the most aligned leads advance to coaches.
☑ Persuasive Pathway Creation: Masterfully articulate Nbyula's value in opening global doors, compelling leads to commit to exploratory sessions with senior coaches. Turn ambiguity into action, converting tentative inquiries into booked consultations with confidence and finesse.
☑ Matchmaking Excellence: Analyze lead profiles (goals, background, preferences) to pair them with the best-fit Study Abroad Coach.
Curate briefs that equip coaches for personalized, impactful first sessions.
☑ Ecosystem Collaboration: Sync seamlessly with senior coaches and sales teams, sharing lead insights to refine strategies. Track and report lead-quality trends to optimize engagement approaches.
☑ Journey Ambassador: Embody Nbyula's ethos in every interaction: polished, empathetic, and future-focused. Maintain meticulous lead records and nurture early-stage prospects through tailored follow-ups.

Who you are:
◙ Your Curiosity is Magnetic: You ask the right questions intuitively, uncovering dreams and hesitations in equal measure. Conversations are your discovery playground.
◙ Communication is Your Compass: You navigate chats with clarity and warmth, transforming complex journeys into exciting, understandable next steps. Persuasion feels natural, not pushy.
◙ Resilience is Your Rhythm: Rejections are pauses in the symphony. You bounce back with infectious energy, turning "maybes" into "let's talks."
◙ Adaptable & Tech-Savvy: You thrive in flux, embracing new tools (CRMs, analytics) to streamline your craft. Change is your canvas for more intelligent workflows.
◙ Collaborative Catalyst: You amplify team success. Sharing insights, supporting peers, and celebrating collective wins is in your DNA.
◙ Advantageous Edge: Familiarity with lead management systems or sales tech is a welcome bonus, not a barrier.

✰ Perks:
Compensation that rewards your mastery, supplemented with performance-driven incentives.
A wholesome package of training and development avenues that constantly enrich your skill set.
An ecosystem fostering innovation, where every voice harmonizes into the choir of progress.
A chance to script your chapter in Nbyula's success saga, celebrated with fervor.

Who is an ideal match for being a Terraformer at Nbyula? These are the attributes we look for in an ideal teammate:
Openness: We welcome people from different backgrounds and schools of thought. Terraformers are open to different perspectives in approaching a solution and do not limit their thoughts or ideas to a single domain.
Conscientiousness: We believe in working together toward the larger goal, with complete dedication and not just for personal benefit. However, we do not expect Terraformers to work to the point of burnout.
Humility: Being humble, grateful, and respectful are core traits of Terraformers. We do not expect people to agree with every view of the management; feel free to hold a different perspective, but we always expect it to be put forward with respect.
Risk Takers: Terraformers are not afraid of the unknown and are open to new things. We do not encourage extreme risks without weighing the consequences, but we do take calculated risks.
Autodidacts: Terraformers teach themselves to learn and do their own research to find solutions. We do not expect you to start from a blank slate and figure everything out yourself; we are here to guide you, but not to handhold or micromanage you.
Self-Actualization: Terraformers are on the path of self-actualization, undistracted by the noise around them and working toward their full potential. We do not expect you to overburden yourself or forgo fun, but we expect you to work to the best of your capabilities.

About Us:
Nbyula is a German technology brand headquartered in Berlin with a Research & Development Center in Bengaluru, Karnataka, operational since 2014. Nbyula believes in creating an open world where opportunities and talent are not hindered by borders or bureaucracies. Nbyula is materializing this by leveraging the bleeding edge of technologies like cloud, distributed computing, crowdsourcing, automation, gamification, and many more.
The North Star is to create a horizontal marketplace encompassing people, content, products, and services for international work and studies, to enable, train, and empower "Skillizens without Borders".

To know more about us, please visit https://nbyula.com/about-us

Find your future at Nbyula! For any queries about this position or how to apply, feel free to write to people@nbyula.com

*Terraformers: a sci-fi reference to planetary engineers who craft entire terrains, hydrospheres, lithospheres, and atmospheres to make a planet habitable for life forms. In Nbyula terms, this is analogous to discovering, shaping, and settling new worlds for Skillizens without Borders.

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹50,000.00 per month
Benefits:
Cell phone reimbursement
Health insurance
Leave encashment
Paid sick time
Paid time off
Provident Fund

Application Question(s):
How many years of Business Development experience do you currently have?
How many years of experience do you currently have in the Study Abroad domain?
How many years of start-up experience do you currently have?
This is an onsite role with 6 working days a week. Are you comfortable with that?
What is your current CTC? (Please mention the fixed component per annum.)
We must fill this position urgently. Can you start immediately?
What is your notice period? (Mention the number of days.)

Work Location: In person

Posted 2 days ago


0.0 - 1.0 years

0 - 0 Lacs

Mohali, Punjab

On-site

About Net Spark Solutions:
Net Spark Solutions is a leading digital solutions provider delivering innovative web design and development services with a team of skilled industry experts. We specialize in creating fully functional digital solutions that help businesses grow, reach global audiences, and boost revenue.

Experience: 1 to 2 years
Job Type: Full-time
Mode: Work from office
Location: Mohali, Punjab

Job Overview:
We're looking for a Shopify Developer with 1-2 years of experience to build and customize Shopify stores from design files (Figma, PSD, XD, etc.). You'll work on theme development, app integration, performance optimization, and troubleshooting. Strong skills in Liquid, HTML, CSS, JavaScript, and jQuery are required. Experience with Shopify Admin, third-party APIs, and eCommerce best practices is a plus.

Key Responsibilities:
Convert design mockups (Figma, PSD, Adobe XD, etc.) into responsive, fully functional Shopify stores
Develop, customize, and maintain Shopify themes to meet business requirements
Identify and fix issues related to storefronts, checkout flow, and site performance
Integrate third-party tools, APIs, and Shopify apps to enhance store functionality
Optimize websites for speed, SEO, and user conversions

Requirements:
1-2 years of hands-on experience in Shopify development
Proficiency in Liquid, HTML, CSS, JavaScript, and jQuery
Strong experience in theme customization and app integrations
Familiarity with Shopify Admin; knowledge of Shopify Plus is a plus
Solid understanding of eCommerce workflows and best practices
Good communication and teamwork skills
Bachelor's degree in computer science, IT, or a related field

Why Join Net Spark Solutions?
Work on international projects: gain valuable hands-on experience by contributing to global digital solutions.
Flexible work timings: enjoy a better work-life balance with flexible scheduling options.
Positive & friendly environment: be part of a supportive team culture that values collaboration and growth.
Learning & development: access mentorship, training resources, and professional courses to grow your skills and career.

Job Type: Full-time
Pay: ₹15,000.00 - ₹30,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: reliably commute or plan to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience:
HTML, CSS, JavaScript, and jQuery: 1 year (Required)
Developing and customizing Shopify themes: 1 year (Required)
Implementing third-party tools and APIs: 1 year (Required)
Work Location: In person

Posted 2 days ago
