8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview:
We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
- 8+ years of experience in data architecture or engineering roles, including 3+ years specializing in cloud-based big data solutions.
- Hands-on expertise with Databricks on Azure or AWS.
- Deep understanding of Delta Lake, the medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
- Strong experience migrating large datasets and batch/streaming pipelines from on-premises systems to Databricks.
- Expertise with Spark (PySpark/Scala) at scale and in optimizing Spark jobs.
- Familiarity with ingestion from RDBMS sources (Oracle, SQL Server) and legacy Hadoop ecosystems.
- Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
- Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
- Databricks Certified Data Engineer or Architect.
- Azure/AWS cloud certifications.
- Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
- Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
- Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
- Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
- Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model; a minimal pipeline sketch follows this listing.
- Lead data modeling, schema design, performance optimization, and data governance best practices.
- Collaborate with data engineering, platform, and security teams to build production-ready solutions.
- Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation.
- Support cost optimization, scalability design, and operational monitoring frameworks.
- Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
- Ability to lead architecture discussions with technical and business stakeholders.
- Passion for modern cloud data architectures and continuous learning.
- Pragmatic, solution-driven approach to migrations.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process and to perform crucial job functions in potential roles. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
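To make the medallion model referenced above concrete, here is a minimal PySpark sketch of the bronze/silver/gold flow. It is an illustration only, not Oracle's actual pipeline: the paths, dataset name (orders), and column names are hypothetical, and it assumes a Databricks-style environment where Delta Lake is available.

```python
# Minimal medallion-architecture sketch in PySpark (hypothetical paths/columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source data as-is, stamped with ingestion metadata.
bronze = (spark.read.format("json").load("/mnt/raw/orders")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: cleanse and conform; deduplicate, enforce types, drop bad rows.
silver = (spark.read.format("delta").load("/mnt/bronze/orders")
          .dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("order_id").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: business-level aggregates ready for analytics and publishing.
gold = (silver.groupBy(F.to_date("order_ts").alias("order_date"))
        .agg(F.sum("amount").alias("daily_revenue")))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_revenue")
```

In a governed deployment, these tables would typically be registered in Unity Catalog (e.g., catalog.schema.table) rather than addressed by storage path.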
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview:
We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
- 8+ years of experience in data architecture or engineering roles, including 3+ years specializing in cloud-based big data solutions.
- Hands-on expertise with Databricks on Azure or AWS.
- Deep understanding of Delta Lake, the medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
- Strong experience migrating large datasets and batch/streaming pipelines from on-premises systems to Databricks.
- Expertise with Spark (PySpark/Scala) at scale and in optimizing Spark jobs.
- Familiarity with ingestion from RDBMS sources (Oracle, SQL Server) and legacy Hadoop ecosystems.
- Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
- Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
- Databricks Certified Data Engineer or Architect.
- Azure/AWS cloud certifications.
- Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
- Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
- Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
- Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
- Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model.
- Lead data modeling, schema design, performance optimization, and data governance best practices.
- Collaborate with data engineering, platform, and security teams to build production-ready solutions.
- Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation.
- Support cost optimization, scalability design, and operational monitoring frameworks.
- Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
- Ability to lead architecture discussions with technical and business stakeholders.
- Passion for modern cloud data architectures and continuous learning.
- Pragmatic, solution-driven approach to migrations.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process and to perform crucial job functions in potential roles. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview:
We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using the Azure or AWS platforms. The role focuses on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable, reliable data solutions.

Required Skills and Experience:
- 6+ years of hands-on data engineering experience, including 2+ years working specifically on Databricks in Azure or AWS.
- Proficiency in building and optimizing Spark pipelines (batch and streaming).
- Strong experience implementing bronze/silver/gold data models.
- Working knowledge of cloud storage systems (ADLS, S3) and compute services.
- Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
- Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
- Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with Delta Live Tables (DLT) and Databricks SQL.
- Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
- Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
- Migrate and transform large data volumes from traditional on-premises systems (Oracle, Hadoop, Exadata) into cloud data platforms (see the ingestion sketch after this listing).
- Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
- Implement data quality checks, error handling, retries, and data validation frameworks.
- Build automation scripts and CI/CD pipelines for Databricks workflows and deployments.
- Tune Spark jobs and optimize cost and performance in cloud environments.
- Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
- Strong analytical and problem-solving skills.
- Attention to scalability, resilience, and cost efficiency.
- Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process and to perform crucial job functions in potential roles. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
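The ingestion sketch referenced in the responsibilities above: one common first step in the RDBMS-to-Databricks migrations this role describes is a partitioned JDBC read landed as a bronze Delta table. Everything specific here (connection URL, schema/table, partition bounds, secret handling) is a placeholder assumption for illustration, not a detail from the posting, and the cluster is assumed to have the Oracle JDBC driver and Delta Lake available.

```python
# Sketch: batch-ingest an on-prem Oracle table into a bronze Delta table.
# All connection details and names below are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdbms-to-bronze").getOrCreate()

df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//onprem-db:1521/ORCL")
      .option("dbtable", "SALES.ORDERS")
      .option("user", "etl_user")
      .option("password", "<from-secret-scope>")  # fetch from a secret store, never hard-code
      # Partitioned read so Spark pulls the table over parallel JDBC connections:
      .option("partitionColumn", "ORDER_ID")
      .option("lowerBound", "1")
      .option("upperBound", "100000000")
      .option("numPartitions", "16")
      .load())

(df.write.format("delta")
   .mode("overwrite")
   .save("/mnt/bronze/sales_orders"))
```

Streaming sources would use Structured Streaming (for example, Auto Loader on Databricks) instead of a one-shot batch read like this.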
Posted 1 week ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview:
We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
- 8+ years of experience in data architecture or engineering roles, including 3+ years specializing in cloud-based big data solutions.
- Hands-on expertise with Databricks on Azure or AWS.
- Deep understanding of Delta Lake, the medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
- Strong experience migrating large datasets and batch/streaming pipelines from on-premises systems to Databricks.
- Expertise with Spark (PySpark/Scala) at scale and in optimizing Spark jobs.
- Familiarity with ingestion from RDBMS sources (Oracle, SQL Server) and legacy Hadoop ecosystems.
- Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
- Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
- Databricks Certified Data Engineer or Architect.
- Azure/AWS cloud certifications.
- Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
- Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
- Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
- Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
- Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model.
- Lead data modeling, schema design, performance optimization, and data governance best practices.
- Collaborate with data engineering, platform, and security teams to build production-ready solutions.
- Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation.
- Support cost optimization, scalability design, and operational monitoring frameworks.
- Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
- Ability to lead architecture discussions with technical and business stakeholders.
- Passion for modern cloud data architectures and continuous learning.
- Pragmatic, solution-driven approach to migrations.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process and to perform crucial job functions in potential roles. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview:
We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using the Azure or AWS platforms. The role focuses on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable, reliable data solutions.

Required Skills and Experience:
- 6+ years of hands-on data engineering experience, including 2+ years working specifically on Databricks in Azure or AWS.
- Proficiency in building and optimizing Spark pipelines (batch and streaming).
- Strong experience implementing bronze/silver/gold data models.
- Working knowledge of cloud storage systems (ADLS, S3) and compute services.
- Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
- Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
- Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with Delta Live Tables (DLT) and Databricks SQL.
- Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
- Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
- Migrate and transform large data volumes from traditional on-premises systems (Oracle, Hadoop, Exadata) into cloud data platforms.
- Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
- Implement data quality checks, error handling, retries, and data validation frameworks.
- Build automation scripts and CI/CD pipelines for Databricks workflows and deployments.
- Tune Spark jobs and optimize cost and performance in cloud environments.
- Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
- Strong analytical and problem-solving skills.
- Attention to scalability, resilience, and cost efficiency.
- Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process and to perform crucial job functions in potential roles. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
4.0 years
8 - 25 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Data Scientist – AI/ML Engineering

We are looking for a Data Scientist with strong AI/ML engineering skills to join our high-impact team at KrtrimaIQ Cognitive Solutions. This is not a notebook-only role: you must have production-grade experience deploying and scaling AI/ML models in cloud environments, especially GCP, AWS, or Azure. The role involves building, training, deploying, and maintaining ML models at scale and integrating them with business applications. Basic model prototyping won’t qualify; we’re seeking hands-on expertise in building scalable machine learning pipelines.

Key Responsibilities:
- Design, train, test, and deploy end-to-end ML models on GCP (or AWS/Azure) to support product innovation and intelligent automation.
- Implement GenAI use cases using LLMs.
- Perform complex data mining and apply statistical algorithms and ML techniques to derive actionable insights from large datasets.
- Drive the development of scalable frameworks for automated insight generation, predictive modeling, and recommendation systems.
- Work on impactful AI/ML use cases in search and personalization, SEO optimization, marketing analytics, supply chain forecasting, and customer experience.
- Implement real-time model deployment and monitoring using tools like Kubeflow, Vertex AI, Airflow, and PySpark.
- Collaborate with business and engineering teams to frame problems, identify data sources, build pipelines, and ensure production readiness.
- Maintain deep expertise in cloud ML architecture, model scalability, and performance tuning.
- Stay up to date with AI trends, LLM integration, and modern practices in machine learning and deep learning.

Technical Skills Required

Core ML & AI Skills (Must-Have):
- Strong hands-on ML engineering (70% of the role): supervised/unsupervised learning, clustering, regression, optimization.
- Experience with real-world model deployment and scaling, not just notebooks or prototypes.
- Good understanding of MLOps, the model lifecycle, and pipeline orchestration.
- Strong with Python 3, Pandas, NumPy, scikit-learn, TensorFlow, PyTorch, Seaborn, Matplotlib, etc.
- SQL proficiency and experience querying large datasets.
- Deep understanding of linear algebra, probability/statistics, Big-O analysis, and scientific experimentation.
- Cloud experience in GCP (preferred), AWS, or Azure.

Cloud & Big Data Stack:
- Hands-on experience with GCP tools (Vertex AI, Kubeflow, BigQuery, GCS) or equivalent AWS/Azure ML stacks.
- Familiarity with Airflow, PySpark, or other pipeline orchestration tools.
- Experience reading and writing data to and from cloud services.

Qualifications:
- Bachelor’s, Master’s, or Ph.D. in Computer Science, Mathematics, Engineering, Data Science, Statistics, or a related quantitative field.
- 4+ years of experience in data analytics and machine learning roles.
- 2+ years of experience in Python or similar programming languages (Java, Scala, Rust).
- Must have experience deploying and scaling ML models in production.

Nice to Have:
- Experience with LLM fine-tuning, graph algorithms, or custom deep learning architectures.
- Background in taking academic research to production applications.
- Building APIs and monitoring production ML models.
- Familiarity with advanced math: graph theory, PDEs, optimization theory.

Communication & Collaboration:
- Strong ability to explain complex models and insights to both technical and non-technical stakeholders.
- Ask the right questions, clarify objectives, and align analytics with business goals.
- Comfortable working cross-functionally in agile, collaborative teams.
Important Note:
This is a data-science-heavy role: 70% of the responsibilities involve building, training, deploying, and scaling AI/ML models. Cloud experience is mandatory (GCP preferred; AWS/Azure acceptable). Only candidates with hands-on experience deploying ML models into production (not just notebooks) will be considered.

Skills: Machine Learning (ML), Production Management, Large Language Models (LLM), AI/ML, and Google Cloud Platform (GCP)
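As a small illustration of the "pipelines, not notebooks" emphasis above, here is a hedged scikit-learn sketch that bundles preprocessing and model into a single artifact a serving layer can load. The data is synthetic and the artifact path is arbitrary; a real deployment would swap in the team's feature data and model registry.

```python
# Minimal sketch of a deployable scikit-learn pipeline (synthetic data, illustrative path).
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Bundling preprocessing with the model keeps train/serve behavior consistent.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(f"holdout accuracy: {pipeline.score(X_test, y_test):.3f}")

# Serialize the whole pipeline; a serving layer (e.g., a Vertex AI endpoint or
# a FastAPI container) would load this artifact rather than retraining.
joblib.dump(pipeline, "model.joblib")
```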
Posted 1 week ago
20.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Job Title: Senior Software Engineer
Location: Bangalore
Position Type: Full-time
Position Level: 3

Who We Are
Xactly is a leader in Sales Performance Management solutions and has been part of the Vista Equity Partners portfolio since 2017. The Xactly Intelligent Revenue Platform helps businesses improve go-to-market outcomes through increased collaboration, greater efficiencies, and connecting data from all critical functions of the revenue lifecycle on a single platform. Born in the cloud almost 20 years ago, Xactly provides customers with extensive experience in solving the most challenging problems customers of all sizes face, backed by almost 20 years of proprietary data and award-winning AI. Xactly has been named among the best workplaces in the U.S. by Great Place to Work six times, honored on FORTUNE Magazine’s inaugural list of the 100 Best Workplaces for Millennials, and chosen as the “Market Leader in Incentive Compensation” by CRM magazine. We’re building a culture of success and are looking for motivated professionals to join us!

The Team
The Xactly development team is full of smart engineers from top companies and universities, and they execute quickly. To build and ship high-quality products fast, efficiently, and continuously, Xactly engineers rely on their leaders to remove obstacles and guide them through engineering practices. At Xactly, we build teams that are helpful, respect each other, maintain a high level of customer focus, and are inclusive of everyone, and we strive for strong product ownership by the team.

The Opportunity
The ideal candidate will be extremely proficient and proven in the design and implementation of modern web application architectures. You must be strong in all aspects of microservices, data access layers, API-first design, and single-click deployment, and in technologies such as Scala, NoSQL key-value data stores, Hadoop, Spark, Chef, and containers. Not only do we offer strong growth opportunities for top performers, but we also have a top-notch culture, benefits, and more. Our strong C.A.R.E. values (Customer Focus, Accountability, Respect & Excellence) guide our every move, allowing us to be a leader in the incentive compensation and performance management market. We set the example with excellent customer experience and deliver an award-winning SaaS (Software-as-a-Service) product! At Xactly, we believe everyone has a unique story to tell, and these small differences between us have a big impact. When bright, diverse minds come together, we’re challenged to think in different ways, generate creative ideas, be more innovative, and take on new perspectives. Our customers come from different cultures and walks of life all around the world, and we believe our teams should reflect that to build strong and lasting relationships.

Required Skills
- A master’s degree plus 5 years, or a bachelor’s degree plus 8 years, of experience in web application development and architecture.
- Extensive experience using open-source software libraries.
- Strong experience with at least one MVC architecture or application of the pattern.
- Solid hands-on experience with Java.
- Strong experience with SQL (Oracle, MySQL, Postgres).
- Strong experience with Spring Boot and REST services.
- Must have built end-to-end continuous integration and deployment infrastructure for microservices.
- Strong commitment to good engineering discipline and process, including code reviews and delivering unit tests in conjunction with feature delivery.
- Excellent communication and teamwork skills.
- Strong presentation and facilitation skills.
- Self-starter who is results-focused, with the ability to work independently and in teams.

Good To Have
- Prior experience building modular, common, and scalable services.
- Experience using Chef, Puppet, or other deployment automation tools.
- Experience working within a distributed engineering team, including offshore.
- Bonus points if you have contributed to an open-source project.
- Familiarity and experience with the agile (Scrum) development process.
- Proven track record of identifying and championing new technologies that enhance the end-user experience, software quality, and developer productivity.

WITHIN ONE MONTH, YOU’LL
- Become familiar with the code base, development processes, and deployments.
- Become familiar with the product as customers will use it. You may even have your first PR approved and in production.

WITHIN THREE MONTHS, YOU’LL
- Become a contributor to the overall code base.
- Have PRs approved and deployed to production.
- Contribute to design.

WITHIN SIX MONTHS, YOU’LL
- Work more autonomously and more closely with product.
- Help troubleshoot issues.
- Contribute new ideas to the product and development.

WITHIN TWELVE MONTHS, YOU’LL
- Become a UI expert for your project.
- Take full ownership of features and processes of the product.

Benefits and Perks
- Comprehensive insurance coverage
- Tuition reimbursement
- XactlyFit gym/fitness program reimbursement
- Kitchen stocked daily with tasty snacks, fruit, and drinks
- Free parking and subsidized bus pass (a go-green initiative!)

About Xactly Corporation
Xactly is a leading provider of enterprise-class, cloud-based incentive compensation solutions for employee and sales performance management. We address a critical business need: to incentivize employees and align their behaviors with company goals. Our products allow organizations to make more strategic decisions, increase employee performance, improve margins, and mitigate risk. Our core values are key to our success, and each day we’re committed to upholding them by delivering the best we can to our customers.

Xactly is proud to be an Equal Opportunity Employer. Xactly provides equal employment opportunities to all employees and applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, pregnancy, sexual orientation, or any other characteristic protected by law. This means we believe in celebrating diversity and creating an inclusive workplace environment, where everyone feels valued, heard, and has a sense of belonging. By doing this, everyone in the Xactly family has the power to make a difference and unleash their full potential. We do not accept resumes from agencies, headhunters, or other suppliers who have not signed a formal agreement with us.
Posted 1 week ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description:

The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city that is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC We Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity
As a Principal Data Engineer, you will build and maintain data systems and construct datasets that are easy to analyze and that support business intelligence requirements as well as downstream systems.

Responsibilities
- Develops and maintains scalable data pipelines and builds out new integrations using AWS-native technologies to support continuing increases in data source, volume, and complexity.
- Collaborates with analytics and business teams to improve data models that feed business intelligence tools and dashboards, increasing data accessibility and fostering data-driven decision making across the organization.
- Implements processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes that depend on it.
- Writes unit, integration, and performance test scripts, contributes to the engineering wiki, and documents work.
- Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
- Works closely with a team of data architects, data engineers, product managers, and analysts.
- Works closely with enterprise teams, including Enterprise Architecture, Security, and Enterprise Data Backbone Engineering, to design and develop data integration patterns and solutions along with proper data models supporting different data and analytics use cases.
- Works with DevOps and the Cloud Center of Excellence to deploy data pipeline solutions in Takeda AWS environments that meet security and performance requirements (a minimal landing-pipeline sketch follows this listing).

Skills and Qualifications

Required:
- Bachelor’s degree from an accredited institution in Engineering, Computer Science, or a related field.
- 10+ years of experience in software, data, data warehouse, data lake, and analytics reporting development.
- Programming knowledge of Python / Scala / ...
- Big data experience with Databricks or PySpark.
- Strong experience in data/big data, data integration, data modeling, modern databases (graph, SQL, NoSQL, etc.) and their query languages, and AWS cloud technologies including DMS, Lambda, Databricks, SQS, Step Functions, data streaming, visualization, etc.
- Solid DBA and dimensional modeling experience.
- Informatica (IDMC) knowledge.
- Experience designing, building, and maintaining data integrations using SOAP/REST web services and APIs, as well as schema design and dimensional data modeling.
- Excellent written and verbal communication skills, including the ability to interact effectively with multifunctional teams.

Preferred:
- Experience with GenAI, language models, LLMs, and related libraries.
- MLOps.

WHAT TAKEDA CAN OFFER YOU
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Our benefits include:
- Competitive salary plus a performance-based annual bonus
- Flexible work environment, including hybrid working
- Comprehensive healthcare insurance plans for self, spouse, and children
- Group term life insurance and group accident insurance programs
- Employee Assistance Program
- Broad variety of learning platforms
- Diversity, equity, and inclusion programs
- Reimbursements for home internet and mobile phone
- Employee referral program
- Leave: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 calendar days)

ABOUT ICC IN TAKEDA
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
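The landing-pipeline sketch referenced above shows one way two of the listed AWS services (Lambda and SQS) might compose: a Lambda handler that writes SQS message bodies to S3 for a downstream job to pick up. The bucket name, key layout, and downstream consumer are assumptions made for illustration, not Takeda's actual design.

```python
# Sketch: Lambda handler landing SQS messages in S3 (bucket/prefix are placeholders).
import json
import uuid

import boto3

s3 = boto3.client("s3")
LANDING_BUCKET = "example-landing-bucket"  # hypothetical bucket name


def handler(event, context):
    """Triggered by SQS; writes each message body to S3 as a JSON object."""
    for record in event["Records"]:  # standard SQS event shape
        payload = json.loads(record["body"])
        key = f"raw/events/{uuid.uuid4()}.json"
        s3.put_object(
            Bucket=LANDING_BUCKET,
            Key=key,
            Body=json.dumps(payload).encode("utf-8"),
        )
    # A downstream consumer (e.g., a Databricks Auto Loader stream) would pick
    # files up from s3://example-landing-bucket/raw/events/.
    return {"written": len(event["Records"])}
```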
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location: Bengaluru

We are looking for an outstanding talent in an individual contributor role to help build a new data lakehouse.

- Proven experience as a data engineer (an individual contributor, hands-on role) or in a similar role, with a focus on cloud distributed data processing platforms for Spark and modern open table formats like Delta/Iceberg.
- Solid experience with Azure: Synapse Analytics, Data Factory, Data Lake, Databricks, Microsoft Purview, Monitor, SQL Database, SQL Managed Instance, Stream Analytics, Cosmos DB, Storage Services, ADLS, Azure Functions, Log Analytics, serverless architecture, ARM templates.
- Strong proficiency in Spark, SQL, and Python/Scala/Java.
- Must-have skills: Python, Spark, Azure Data Factory, Azure Fabric, Azure Functions, ADLS, SQL, Azure SQL, Log Analytics.
- Experience building a lakehouse architecture using open-source table formats like Delta and Parquet and tools like Jupyter notebooks (a minimal Delta upsert sketch follows this listing).
- Strong grasp of security best practices (e.g., using Azure Key Vault, IAM, RBAC, Monitor, etc.).
- Proficient in integrating, redefining, and consolidating data from various structured and unstructured data systems into a structure suitable for building analytics solutions.
- Understands the data through exploration; experience with processes related to data retention, validation, visualization, preparation, matching, fragmentation, segmentation, and improvement.
- Demonstrated ability to understand business requirements.
- Agile development processes (Scrum and Kanban).
- Good communication, presentation, documentation, and social skills.
- Able to self-manage and work independently in a fast-paced environment with multifaceted requirements and priorities.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions.

Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity.

LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs.

Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, how it’s obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
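The Delta upsert sketch referenced in the skills list above: a minimal idempotent merge into a lakehouse table using the Delta Lake Python API. The table paths and the customer_id join key are hypothetical, and the snippet assumes a Spark session with the delta-spark package configured (as on Databricks or Synapse).

```python
# Sketch: idempotent upsert into a Delta lakehouse table (names are illustrative).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

# Incoming change set, e.g., staged by an Azure Data Factory copy activity.
updates = spark.read.format("parquet").load("/mnt/staging/customers")

# Merge: update rows that already exist, insert the rest.
target = DeltaTable.forPath(spark, "/mnt/lakehouse/customers")
(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

Because the merge is keyed, re-running the same batch leaves the table unchanged, which is what makes retries safe in orchestrated pipelines.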
Posted 1 week ago
4.0 - 10.0 years
4 - 6 Lacs
Gurgaon
On-site
Join our Team

About this opportunity
The support engineer is a member of a team highly skilled in supporting customers, first-line engineers, and third parties with advanced troubleshooting, fault isolation, and remediation, securing availability and fast resolution for the OSS/BSS product. You will be surrounded by people who are smart, passionate about cloud computing, and believe that world-class support is critical to customer success. Every day will bring new and exciting challenges on the job while you:
- Learn and use groundbreaking technologies
- Apply advanced troubleshooting techniques to provide unique solutions to our customers' individual needs
- Interact with leading technologists around the world
- Work directly with the Ericsson Product Development team to help reproduce and resolve customer issues
- Leverage your extensive customer support experience to provide feedback to internal Ericsson teams on how our customers use our services
- Drive customer communication during critical events

What you will do
Purpose: we are here to solve product issues for our customers by improving our product long term, and to be proactive in all we do.

Role responsibilities:
- Handle customer support requests according to the defined process
- Provide support on detailed technical queries and solutions to source-code-level problems
- Create and conclude trouble reports, and update them with recommended solutions for the Design Maintenance team when identifying software bugs
- Be part of 24/7 emergency duty and support on critical cases
- Collect customer feedback and submit it to the R&D program to continue improving the product
- Continuously update the knowledge base and share knowledge within the organization
- Participate in FFI (first feature introduction) activities
- Provide on-site support when needed
- Be part of serviceability and service preparation activities

Required Skills:
- Documented and proven knowledge of cloud-native concepts, Docker, Kubernetes, AWS, Azure, and GCP
- Awareness of product security, privacy, and risk assessment
- Deep competence in troubleshooting and fault isolation using tools in complex IT/telco systems
- Understanding, analyzing, and troubleshooting code
- Scripting experience with tools such as Bash, Python, Perl, and Ansible; Cassandra and Scala preferred
- Ability to maintain professional communication with customers and local companies, especially in critical situations
- Composure and readiness to work under high pressure from our customers and local companies while providing support

You will bring
A minimum of 4-10 years' experience running services on Linux, in technical support, emergency handling, and customer ticket/request handling. Qualified candidates will have demonstrated these key traits:
- A very strong customer focus
- Ability to juggle many tasks and projects in a fast-moving environment
- A self-starter's excitement about technology
- Good time management and multitasking capabilities
- A good teammate who is also comfortable working on their own initiative
- Flexibility with working hours
- Familiarity with general business terms and processes
- An innovative and creative approach to problem solving, coupled with advanced diagnostic and technical analysis skills
- Values of perseverance, professionalism, respect, and working with integrity

Education: B Tech, M Tech, or similar experience in a relevant area (software development, telco business). Minimum 4 years of working experience in the telecom area (mandatory).

Why join Ericsson?
At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that’s why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Gurgaon
Req ID: 770584
Posted 1 week ago
3.0 years
4 - 6 Lacs
Hyderābād
On-site
Responsibilities:
- Develop quality-focused test strategies that help the team deliver software with maximum quality without sacrificing business value. Test early and often.
- Work with multiple teams of engineers and product managers to gather requirements and data sources, and design and build test plans, including data quality validations.
- Participate in projects developed with agile methodology.
- Design and document test cases and test data to ensure proper coverage. Perform exploratory/manual tests as needed.
- Collaborate with software engineers to triage issues and work to ensure the validity, timeliness, consistency, completeness, and accuracy of our data across all data platform components.
- Write, execute, and monitor automated test suites for integration and regression testing. Integrate tests as part of continuous delivery pipelines.
- Define quality metrics and build quality monitoring solutions and dashboards.
- Nurture a culture of quality through collaboration with teammates across the engineering function to make sure quality is embedded in both processes and technology.
- Mentor and coach team members to ensure appropriate testing coverage within the team, with a focus on continuous testing and a shift-left approach.

A suitable candidate would have:
- A minimum of 3 years of testing experience working with applications developed in languages like Node.js, Python, Golang, and Java.
- Solid experience in writing clear, concise, and comprehensive test plans and test cases.
- Experience building automated test suites for REST and/or gRPC APIs with a focus on data validation (a minimal pytest sketch follows this listing).
- Experience in a programming language such as Python, Java, or Golang, using it for automated API testing and web UI testing.
- Experience with UI frameworks like Selenium, WebdriverIO, Cucumber, and pytest.
- Experience with test case management tools like TestRail and API testing tools like Postman.
- Knowledge of data quality tools like Great Expectations, Deequ, etc. (desirable).
- An understanding of databases and ORMs, with experience in at least one RDBMS and a database query language; highly experienced in writing queries for data validation across different data sources and during the processing pipeline.
- Experience with modern quality engineering principles such as continuous testing and shift-left.
- A good understanding of service-oriented and microservices architecture.
- Experience with cloud environments like AWS and GCP, source control tools like GitHub, and continuous integration and delivery software.
- The attitude to work in a fast-paced environment that values agility over talk.
- Great communication skills.
- Experience testing fulfillment systems (a plus).

Skill Set: Scala, Go, Python, Java, TestRail, pytest, Postman, test automation and continuous delivery frameworks, the Great Expectations framework, AWS, GitHub, GitLab, Selenium, Node.js, React.
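The pytest sketch referenced above, showing API-level data validation of the kind the role emphasizes. The endpoint, fields, and business rule here are invented for illustration; a real suite would target the team's services and schemas.

```python
# Minimal pytest sketch for REST API data-validation checks.
# The base URL, fields, and expected values are hypothetical.
import pytest
import requests

BASE_URL = "https://api.example.com"  # placeholder service


@pytest.fixture
def order():
    """Fetch one order; failures here fail every dependent test."""
    resp = requests.get(f"{BASE_URL}/orders/42", timeout=10)
    resp.raise_for_status()
    return resp.json()


def test_order_schema(order):
    # Validate presence and types of required fields.
    assert isinstance(order["order_id"], int)
    assert isinstance(order["total"], (int, float))
    assert order["currency"] in {"USD", "EUR", "INR"}


def test_order_consistency(order):
    # Cross-field rule: line-item amounts must sum to the order total.
    line_sum = sum(item["amount"] for item in order["line_items"])
    assert line_sum == pytest.approx(order["total"])
```

Suites like this slot directly into a CI pipeline, which is how the "integrate tests as part of continuous delivery pipelines" responsibility is typically met.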
Posted 1 week ago
0 years
0 Lacs
Delhi
Remote
ABOUT US Eskimi is a full-stack programmatic advertising platform capable of reaching 96% of the open web. Our platform allows us to plan, build, and execute high-performing advertising campaigns in over 162 markets. What sets us apart is our commitment to bringing premium creativity to the table in all aspects of our work, and leveraging innovative formats that help us bring the best outcome for advertising agencies and brands all over the world. How we do things at Eskimi is defined by our strong wish to grow, high sense of ownership, innovation and drive, and collaboration between our teams. With Eskimi team spread across 30+ countries and 5 continents, our global presence creates a dynamic environment, fostering diversity and inclusion. YOUR TEAM & YOUR MISSION Eskimi engineering department is a vibrant hub of creativity and innovation. It’s a team of curious engineers who navigate the complexities of adtech with persistence and ingenuity. We believe in empowering every team member to contribute unique perspectives. Collaboration is key. Whether brainstorming solutions or refining algorithms, we work together to excel. You will get to explore cutting-edge technologies and methodologies with us - from scalable systems to real-time bidding algorithms, there's always something new to learn. We value camaraderie and support within our team. Join us and be part of something extraordinary. As a Backend Developer, you’ll help shape our Ad Exchange by building features that directly impact performance and drive revenue. From fine-tuning real-time bidding to scaling systems that handle massive traffic, your work will be at the core of our platform. You’ll join a sharp, supportive team where learning and improving never stop. WHAT YOU’LL DO: Dive deep into the fast-paced world of Ad Tech Design and implement features that directly generate revenue Collaborate closely with a cross-functional team of engineers, product managers, and data analysts Work in an Agile environment with frequent iterations and releases Write backend services in Scala — no Scala experience? No worries! If you’re eager to learn and ready to dive in, we’ll support your transition Continuously improve and refactor the existing codebase to keep things clean, scalable, and robust WHAT WILL HELP YOU DO IT: Hands-on experience with modern programming languages like Java, C#, Go, or Scala Strong communication skills — you can clearly explain your thinking and understand what others need from your work Solid background with various storage systems: relational, in-memory, and NoSQL — and a good grasp of when to use what Familiarity with Git, testing practices, and system monitoring tools A “build it to last” mindset — you write code meant to survive updates, scale, and serve users long-term Comfortable working with remote, distributed teams across time zones WHAT’S IN IT FOR YOU: Flexible work arrangements, including hybrid work models in cities with physical offices, and remote work options everywhere else. Where we work in a hybrid model, our team-members can use Work Away Days, allowing them to work fully remotely for up to 1 month per year. We also have flexible working hours, with most Eskimians starting the day at 9 am in their local time zones. Professional development opportunities through programs like Leaders Assembly for managers, Mentorship programs for growing talents, regular learning sessions, and access to external consultants. 
Our Internship programs also serve as stepping stones for career starters, often leading to full-time roles within the team.
Recognition culture with celebrations of achievements. We value everyone's contribution in bringing the best talent and new clients to Eskimi, and we offer bonus systems to encourage it. The Bonusly recognition system also highlights accomplishments, allowing team members to share recognition points redeemable for various gifts and vouchers.
Additional perks such as private health insurance (location-dependent), volunteer days, and organized online and in-person get-togethers in office locations to foster meaningful connections among team members.
Take a day off to celebrate your birthday! We believe that everyone deserves to unwind and enjoy their special day, so we provide an extra day off just for your birthday.
JOIN US! Be a part of a fast-growing AdTech company and work with products that change the landscape of digital advertising around the globe. Let's grow together! With us - not even the sky's the limit.
Posted 1 week ago
8.0 years
3 - 7 Lacs
Chennai
On-site
Sr. AI Developer This role has been designed as 'Hybrid' with an expectation that you will work on average 2 days per week from an HPE office. Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. Job Description: Aruba is an HPE company and a leading provider of next-generation network access solutions for the mobile enterprise. Helping some of the largest companies in the world modernize their networks to meet the demands of a digital future, Aruba is redefining the "Intelligent Edge" – and creating new customer experiences across intelligent spaces and digital workspaces. Join us to redefine what's next for you. How you will make your mark… The ideal candidate will have experience working with AI technologies, including LLMs/GenAI, and in application development, in order to build and deploy an AI chatbot that supports business management. Experience with MS Power Platform, Java, and Databricks is preferred. What you'll do: Responsibilities: As a Sr. AI Developer, the primary responsibility will be full-stack development of an AI chatbot application for business management, integrating business-relevant data with LLMs, and helping the team deliver incremental features for on-demand, AI-assisted analytics services on a hybrid tech stack. Translate business requirements into scalable and performant technical solutions. Design, code, test, and assure the quality of complex AI-powered product features. Partner with a highly motivated and talented set of colleagues. Be a motivated self-starter who can operate with minimal handholding. Collaborate across teams and time zones, demonstrating flexibility and accountability. Education and Experience Required: 8-10+ years of Data Engineering & AI Development experience, with significant exposure to building AI chatbots on a hybrid tech stack across SQL Server, Hadoop, Azure Data Factory, and Databricks. Advanced university degree (e.g., Master's) or demonstrable equivalent. What you need to bring: Knowledge and Skills: Demonstrated ability to build or integrate AI-driven features into enterprise applications. Strong knowledge of Computer Science fundamentals. Experience with SQL databases and building SSIS packages; knowledge of NoSQL and event streaming (e.g., Kafka) is a bonus. Experience working with LLMs and generative AI frameworks (e.g., OpenAI, Hugging Face, etc.). Proficiency in MS Power Platform and Java; Scala and Python experience preferred. Experience with SAP software (e.g., SAP S/4HANA, SAP BW) is an asset. Proven track record of writing production-grade code for enterprise-scale systems. Knowledge of agentic AI and related frameworks. Strong collaboration and communication skills. Experience using tools like JIRA for tracking tasks and bugs, with Agile CI/CD workflows. Strong domain experience across Sales, Finance, or Operations with a deep understanding of key KPIs & metrics.
Collaborates with senior managers/directors of the business on the AI chatbot, BI, Data Science, and Analytics roadmap. Owns business requirements, prioritization, and execution to deliver actionable insights that enable decision making, support strategic initiatives, and accelerate profitable growth. Functions as the subject matter expert for data, analytics, and reporting systems within the organization to yield accurate and proper interpretation of core business KPIs/metrics. Performs deep-dive investigations, including applying advanced techniques, to solve some of the most critical and complex business problems in support of business transformation to enable Product, Support, and Software-as-a-Service offerings.
Additional Skills: Accountability, Active Learning, Active Listening, Bias, Business Decisions, Business Development, Business Metrics, Business Performance, Business Strategies, Calendar Management, Coaching, Computer Literacy, Creativity, Critical Thinking, Cross-Functional Teamwork, Design Thinking, Empathy, Follow-Through, Growth Mindset, Intellectual Curiosity, Leadership, Long Term Planning, Managing Ambiguity, Personal Initiative (+5 more)
What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.
Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE. #india #aruba
Job: Business Planning
Job Level: Expert
HPE is an Equal Employment Opportunity/Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
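As an illustration of the chatbot work this listing describes, below is a minimal sketch of grounding an LLM answer in business data before calling the model. The OpenAI Python client is used purely as a stand-in for whichever provider the team actually adopts; the model name, KPI rows, and prompts are assumptions for illustration only.

```python
# Minimal sketch of grounding a chatbot answer in business data before
# calling an LLM. The OpenAI client is a stand-in provider; the KPI rows
# and model name are illustrative, not details from the posting.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer(question: str, kpi_rows: list[dict]) -> str:
    # Inject the business-relevant rows into the prompt as grounding context.
    context = "\n".join(str(row) for row in kpi_rows)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer strictly from the KPI data provided."},
            {"role": "user",
             "content": f"KPI data:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content


# In practice these rows would be fetched from SQL Server or Databricks.
print(answer("Which region grew fastest?", [
    {"region": "EMEA", "qoq_growth": 0.12},
    {"region": "APAC", "qoq_growth": 0.19},
]))
```

In a production chatbot, retrieval from the hybrid data stack, guardrails, and evaluation would wrap this call.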
Posted 1 week ago
5.0 years
0 Lacs
Chennai
On-site
DESCRIPTION About Amazon.com: Amazon.com strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world. About Team: The RBS team is an integral part of Amazon's online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection, and good product information. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and the online user experience. Overview of the role: The ideal candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and comfortable dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimization and systems thinking, and will be required to engage directly with multiple internal teams to drive business projects/automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven work environment. Responsibilities include: Work across teams and the Ops organization at country, regional, and/or cross-regional level to drive improvements and enable the implementation of solutions for customers, including cost savings in process workflows, systems configuration, and performance metrics. Basic Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Proficiency in automation using Python. Excellent oral and written communication skills. Experience with SQL, ETL processes, or data transformation. Preferred Qualifications: Experience with scripting and automation tools. Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK. Knowledge of AWS services such as SQS, SNS, CloudWatch, and DynamoDB. Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions. Understanding of cloud services, serverless architecture, and systems integration. Key job responsibilities: As a Business Intelligence Engineer on the team, you will collaborate closely with business partners to architect, design, and implement BI projects and automations. Responsibilities: Design, development, and ongoing operations of scalable, performant data warehouse (Redshift) tables, data pipelines, reports, and dashboards. Development of moderately to highly complex data processing jobs using appropriate technologies (e.g., SQL, Python, Spark, AWS Lambda, etc.). Development of dashboards and reports. Collaborating with stakeholders to understand business domains, requirements, and expectations.
Additionally, working with owners of data source systems to understand capabilities and limitations. Deliver minimally to moderately complex data analysis, collaborating as needed with Data Science as complexity increases. Actively manage the timeline and deliverables of projects, anticipate risks, and resolve issues. Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Internal job description: Retail Business Services, ARTS is a growing team that supports the Retail Efficiency and Paid Services business and tech teams. There is ample growth opportunity in this role for someone who exhibits Ownership and Insist on the Highest Standards, and has strong engineering and operational best-practices experience. Basic qualifications: 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science, or a related field. Experience with data modeling, SQL, ETL, data warehousing, and data lakes. Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.). Expert-level SQL. Proficiency with one or more general-purpose programming languages (e.g., Python, Java, Scala, etc.). Knowledge of AWS products such as Redshift, QuickSight, and Lambda. Excellent verbal/written communication & data presentation skills, including the ability to succinctly summarize key findings and effectively communicate with both business and technical teams. Preferred qualifications: Experience with data-specific programming languages/packages such as R or Python pandas. Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR. Knowledge of machine learning techniques and concepts. BASIC QUALIFICATIONS: 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with data modeling, warehousing, and building ETL pipelines. Experience with statistical analysis packages such as R, SAS, and MATLAB. Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling. PREFERRED QUALIFICATIONS: Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
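The "data integrity, test design, analysis, validation" practices named above can be sketched as a small pre-publication quality gate. pandas stands in here for a Redshift query result; the table, columns, and rules are illustrative assumptions, not details from the listing.

```python
# Minimal sketch of a pre-dashboard data-quality gate. The query result
# and thresholds are hypothetical; in practice the frame would come from
# Redshift (e.g. via redshift_connector or JDBC).
import pandas as pd


def validate_daily_sales(df: pd.DataFrame) -> list[str]:
    """Return a list of failed checks; an empty list means safe to publish."""
    failures = []
    if df.empty:
        failures.append("no rows loaded for the reporting day")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if (df["revenue"] < 0).any():
        failures.append("negative revenue values")
    if df["order_date"].isna().any():
        failures.append("null order_date values")
    return failures


df = pd.DataFrame({  # stand-in for a warehouse query result
    "order_id": [1, 2, 3],
    "order_date": pd.to_datetime(["2025-08-01"] * 3),
    "revenue": [120.0, 80.5, 42.0],
})
assert validate_daily_sales(df) == [], "block the dashboard refresh"
```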
Posted 1 week ago
7.0 years
3 - 6 Lacs
Chennai
On-site
Data Governance Engineer This role has been designed as 'Hybrid' with an expectation that you will work on average 2 days per week from an HPE office. Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. Job Description: Aruba is an HPE company and a leading provider of next-generation network access solutions for the mobile enterprise. Helping some of the largest companies in the world modernize their networks to meet the demands of a digital future, Aruba is redefining the "Intelligent Edge" – and creating new customer experiences across intelligent spaces and digital workspaces. Join us to redefine what's next for you. How you will make your mark… The ideal candidate will have experience deploying and managing enterprise-scale Data Governance practices, along with Data Engineering experience developing the database layer to support and enable AI initiatives, as well as a streamlined user experience for Data Discovery and Security & Access Control, in service of meaningful, business-relevant analytics. The candidate will be comfortable across the full-stack analytics ecosystem (database layer, BI dashboards, and AI/Data Science models and solutions) in order to effectively define and implement a scalable Data Governance practice. What you'll do: Responsibilities: Drive the design and development of Data Dictionary, Lineage, Data Quality, and Security & Access Control for business-relevant data subjects and reports across business domains. Engage with the business user community to enable ease of Data Discovery and to build trust in the data through Data Quality & Reliability monitoring, with key metrics and SLAs defined. Support the development and maintenance of data subjects in the database layer to enable BI dashboards and AI solutions. Drive engagement and alignment with the HPE IT/CDO team on governance initiatives, including partnering with functional teams across the business. Test, validate, and assure the quality of complex AI-powered product features. Partner with a highly motivated and talented set of colleagues. Be a motivated self-starter who can operate with minimal handholding. Collaborate across teams and time zones, demonstrating flexibility and accountability. Education and Experience Required: 7+ years of Data Governance and Data Engineering experience, with significant exposure to enabling data availability, data discovery, quality & reliability, with appropriate security & access controls, in an enterprise-scale ecosystem. First-level university degree. What you need to bring: Knowledge and Skills: Experience working with data governance & metadata management tools (Collibra, Databricks Unity Catalog, Atlan, etc.). Subject matter expertise in consent management concepts and tools. Demonstrated knowledge of research methodology and the ability to manage complex data requests. Excellent analytical thinking, technical analysis, and data manipulation skills.
Proven track record of developing SQL/SSIS packages with ETL flows. Experience with AI application deployment governance is a plus. Experience with technologies such as MS SQL Server, Databricks, Hadoop, and SAP S/4HANA. Experience with SQL databases and building SSIS packages; knowledge of NoSQL and event streaming (e.g., Kafka) is a bonus. Exceptional interpersonal skills and written communication skills. Experience and comfort solving problems in an ambiguous environment where there is constant change. Ability to think logically, communicate clearly, and be well organized. Strong knowledge of Computer Science fundamentals. Experience working with LLMs and generative AI frameworks (e.g., OpenAI, Hugging Face, etc.). Proficiency in MS Power Platform and Java; Scala and Python experience preferred. Strong collaboration and communication skills. Experience performing deep-dive investigations, including applying advanced techniques, to solve some of the most critical and complex business problems in support of business transformation to enable Product, Support, and Software-as-a-Service offerings. Strong business acumen and technical knowledge within the area of responsibility. Strong project management skills.
Additional Skills: Accountability, Active Learning, Active Listening, Bias, Business Decisions, Business Development, Business Metrics, Business Performance, Business Strategies, Calendar Management, Coaching, Computer Literacy, Creativity, Critical Thinking, Cross-Functional Teamwork, Design Thinking, Empathy, Follow-Through, Growth Mindset, Intellectual Curiosity, Leadership, Long Term Planning, Managing Ambiguity, Personal Initiative (+5 more)
What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.
Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE. #india #aruba
Job: Business Planning
Job Level: Specialist
HPE is an Equal Employment Opportunity/Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
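One hedged illustration of the Data Quality & Reliability monitoring "with key metrics and SLAs defined" described above: compute completeness and freshness for a data subject and compare them against agreed SLAs. The column names and SLA thresholds are invented for the sketch.

```python
# Minimal sketch of data-quality/SLA monitoring: completeness and
# freshness metrics for one data subject. Columns and SLA values are
# illustrative assumptions.
from datetime import datetime, timedelta, timezone
import pandas as pd

SLAS = {"completeness_pct": 99.0, "max_staleness_hours": 24.0}


def quality_metrics(df: pd.DataFrame, key_cols: list[str]) -> dict:
    # Share of rows whose governed key columns are all populated.
    complete = df[key_cols].notna().all(axis=1).mean() * 100
    # Hours since the most recent load into this data subject.
    staleness = datetime.now(timezone.utc) - df["loaded_at"].max()
    return {
        "completeness_pct": round(complete, 2),
        "staleness_hours": staleness / timedelta(hours=1),
    }


df = pd.DataFrame({
    "customer_id": [101, 102, None],
    "region": ["EMEA", "APAC", "AMER"],
    "loaded_at": [pd.Timestamp.now(tz="UTC")] * 3,
})
m = quality_metrics(df, key_cols=["customer_id", "region"])
breaches = []
if m["completeness_pct"] < SLAS["completeness_pct"]:
    breaches.append("completeness_pct")
if m["staleness_hours"] > SLAS["max_staleness_hours"]:
    breaches.append("max_staleness_hours")
print(m, "SLA breaches:", breaches)
```

A governance platform would publish these metrics to the catalog and alert data owners when a breach persists.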
Posted 1 week ago
5.0 years
2 - 3 Lacs
Chennai
On-site
This is a data engineer position - a programmer responsible for the design, development, implementation, and maintenance of data flow channels and data processing systems that support the collection, storage, batch and real-time processing, and analysis of information in a scalable, repeatable, and secure manner, in coordination with the Data & Analytics team. The overall objective is defining optimal solutions to data collection, processing, and warehousing. The role requires Spark and Java development expertise in big data processing, along with Python and Apache Spark, particularly within the banking & finance domain. The engineer designs, codes, and tests data systems and works on implementing them in the internal infrastructure. Responsibilities: Ensure high-quality software development, with complete documentation and traceability. Develop and optimize scalable Spark Java-based data pipelines for processing and analyzing large-scale financial data. Design and implement distributed computing solutions for risk modeling, pricing, and regulatory compliance. Ensure efficient data storage and retrieval using Big Data technologies. Implement best practices for Spark performance tuning, including partitioning, caching, and memory management. Maintain high code quality through testing, CI/CD pipelines, and version control (Git, Jenkins). Work on batch processing frameworks for market risk analytics. Promote unit/functional testing and code inspection processes. Work with business stakeholders and Business Analysts to understand the requirements. Work with other data scientists to understand and interpret complex datasets. Qualifications: 5-8 years of experience working in data ecosystems. 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other Big Data frameworks. 3+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase. Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL. Data integration, migration & large-scale ETL experience (common ETL platforms such as PySpark/DataStage/Ab Initio, etc.) - ETL design & build, handling, reconciliation, and normalization. Data modeling experience (OLAP, OLTP, logical/physical modeling, normalization, knowledge of performance tuning). Experienced in working with large and multiple datasets and data warehouses. Experience building and optimizing 'big data' data pipelines, architectures, and datasets. Strong analytic skills and experience working with unstructured datasets. Ability to effectively use complex analytical, interpretive, and problem-solving techniques. Experience with Confluent Kafka, Red Hat jBPM, CI/CD build pipelines and toolchain – Git, Bitbucket, Jira. Experience with external cloud platforms such as OpenShift, AWS & GCP. Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos). Experienced in integrating search solutions with middleware & distributed messaging - Kafka. Highly effective interpersonal and communication skills with tech/non-tech stakeholders. Experienced in the software development life cycle, with good problem-solving skills.
Excellent problem-solving skills and a strong mathematical and analytical mindset. Ability to work in a fast-paced financial environment.
Education: Bachelor's/University degree or equivalent experience in computer science, engineering, or a similar domain
Job Family Group: Technology
Job Family: Data Architecture
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
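A brief PySpark sketch of the Spark tuning practices this posting names (partitioning, caching, shuffle sizing). It is written in Python for brevity even though the role centers on Spark/Java; the paths, column names, and settings are illustrative assumptions.

```python
# Minimal PySpark sketch of partitioning, caching, and shuffle sizing.
# Paths and columns are hypothetical; production code would be Spark/Java.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("risk-batch")
         .config("spark.sql.shuffle.partitions", "200")  # size to the cluster
         .getOrCreate())

trades = spark.read.parquet("/data/trades")  # illustrative input path

# Repartition on the aggregation key to avoid skewed shuffles, and cache
# because the frame feeds several downstream aggregates in the same job.
trades = trades.repartition("book_id").cache()

daily_inputs = (trades
                .groupBy("book_id", "trade_date")
                .agg(F.sum("notional").alias("gross_notional"),
                     F.count("*").alias("trade_count")))

(daily_inputs.write
             .mode("overwrite")
             .partitionBy("trade_date")   # partitioned layout for fast reads
             .parquet("/data/var_inputs"))
```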
Posted 1 week ago
0 years
3 - 8 Lacs
Chennai
On-site
Date live: 08/01/2025
Business Area: Risk Finance and Treasury
Area of Expertise: Technology
Contract: Permanent
Reference Code: JR-0000052993
Join us as a Technical Lead at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. To be successful as a Technical Lead you should have experience with: the Hadoop ecosystem; Spark with Scala; SQL and shell scripting; and tech team leadership. Some other highly valued skills include: DevOps; Scala as the core language for Scala-based microservices; cloud-specific skills, preferably AWS. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Chennai. Purpose of the role: To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues. Accountabilities: Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance. Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives. Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing. Staying informed of industry technology trends and innovations, and actively contributing to the organisation's technology communities to foster a culture of technical excellence and growth. Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions. Implementation of effective unit testing practices to ensure proper code design, readability, and reliability. Assistant Vice President Expectations: Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership of managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external sources such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right.
They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 1 week ago
3.0 years
4 - 8 Lacs
Chennai
On-site
DESCRIPTION As a Research Analyst, you'll collaborate with experts to develop ML models leveraging big data solutions and Large Language Models (LLMs) for business needs. You'll drive product pilots, demonstrating innovative thinking and customer focus. You'll build scalable solutions, write high-quality code, and develop state-of-the-art ML models. You'll coordinate between science and software teams, optimizing solutions. The role requires thriving in ambiguous, fast-paced environments and working independently with ML models. Key job responsibilities: Collaborate with seasoned Applied Scientists and propose best-in-class ML solutions for business requirements. Dive deep to drive product pilots, demonstrating the Think Big and Customer Obsession leadership principles (LPs) to steer the product roadmap. Build scalable solutions in partnership with Applied Scientists by developing technical intuition to write high-quality code and develop state-of-the-art ML models, utilizing the most recent research breakthroughs in academia and industry. Coordinate design efforts between Science and Software teams to deliver optimized solutions. Thrive in ambiguous, uncertain, and fast-moving ML use-case development, staying familiar with ML models and working independently. Mentor junior Research Analysts (RAs) and contribute to RA hiring. About the team: The Retail Business Services Technology (RBS Tech) team develops the systems and science to accelerate Amazon's flywheel. The team drives three core themes: 1) Find and fix all customer and selling-partner experience (CX and SPX) defects using technology, 2) Generate comprehensive insights for brand growth opportunities, and 3) Completely automate Stores tasks. Our vision for MLOE (ML operational excellence) is to achieve ML operational excellence across Amazon through continuous innovation, scalable infrastructure, and a data-driven approach to optimize value, efficiency, and reliability. We focus on key areas for enhancing machine learning operations: a) Model Evaluation: expanding the LLM-based audit platform to support multilingual and multimodal auditing, and developing an LLM-powered testing framework for conversational systems to automate the validation of conversational flows, ensuring scalable, accurate, and efficient end-to-end testing. b) Guardrails: building common guardrail APIs that teams can integrate to detect and prevent egregious errors, knowledge-grounding issues, PII breaches, and biases. c) Deployment Framework: supporting LLM deployments and seamlessly integrating them with our release management processes. BASIC QUALIFICATIONS • Bachelor's degree in Quantitative or STEM disciplines (Science, Technology, Engineering, Mathematics) • 3+ years of relevant work experience in solving real-world business problems using machine learning, deep learning, data mining, and statistical algorithms • Strong hands-on programming skills in Python, SQL, and Hadoop/Hive; additional knowledge of Spark, Scala, R, and Java desired but not mandatory • Strong analytical thinking • Ability to creatively solve business problems, innovating new approaches where required and articulating ideas to a wide range of audiences using strong data, written, and verbal communication skills • Ability to collaborate effectively across multiple teams and stakeholders, including development teams, product management, and operations.
PREFERRED QUALIFICATIONS • Master's degree with specialization in ML, NLP, or Computer Vision preferred • 3+ years of relevant work experience in a related field (project management, customer advocate, product owner, engineering, business analysis) • Diverse experience will be favored, e.g., a mix of experience across different roles • In-depth understanding of machine learning concepts, including developing models and tuning hyper-parameters, as well as deploying models and building ML services • Technical expertise and experience in data science, ML, and statistics. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
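Theme (b) above, guardrail APIs that catch PII breaches, can be illustrated with a deliberately simple sketch: scan model output against a few regex patterns before it reaches a user. Real guardrails are far more thorough; these patterns are assumptions for demonstration only.

```python
# Minimal sketch of a PII guardrail check on model output. The patterns
# are illustrative; production guardrails would combine ML detectors,
# knowledge grounding, and bias checks as the posting describes.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[\s-]?)?\d{10}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def pii_findings(text: str) -> dict[str, list[str]]:
    """Return matched PII snippets keyed by category (empty dict if clean)."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}


response = "Contact me at jane.doe@example.com or 9876543210."
findings = pii_findings(response)
if findings:
    print("Blocked:", findings)  # route to redaction instead of the user
```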
Posted 1 week ago
5.0 years
1 - 10 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a talented and motivated Data Engineer to join our growing data team. You will play a key role in building scalable data pipelines, optimizing data infrastructure, and enabling data-driven solutions. Primary Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines for batch and real-time data processing. Build and optimize data models and data warehouses to support analytics and reporting. Collaborate with analysts and software engineers to deliver high-quality data solutions. Ensure data quality, integrity, and security across all systems. Monitor and troubleshoot data pipelines and infrastructure for performance and reliability. Contribute to internal tools and frameworks to improve data engineering workflows. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: 5+ years of experience working on commercially available software and/or healthcare platforms as a Data Engineer. 3+ years of solid experience designing and building Enterprise Data solutions on cloud. 1+ years of experience developing solutions hosted within public cloud providers such as Azure or AWS, or private cloud/container-based systems using Kubernetes/OpenShift. Experience with modern relational databases. Experience with data warehousing services, preferably Snowflake. Experience in using modern software engineering and product development tools including Agile/SAFe, Continuous Integration, Continuous Delivery, DevOps, etc.
Solid experience operating in a quickly changing environment and driving technological innovation to meet business requirements. Skilled at optimizing SQL statements. Subject-matter expertise in cloud technologies, preferably Azure, and the Big Data ecosystem. Preferred Qualifications: Experience with real-time data streaming and event-driven architectures. Experience building Big Data solutions on public cloud (Azure). Experience building data pipelines on Azure with Databricks Spark, Scala, Azure Data Factory, Kafka and Kafka Streams, App Services, and Azure Functions. Experience developing RESTful services in .NET, Java, or any other language. Experience with DevOps in data engineering. Experience with microservices architecture. Exposure to DevOps practices and infrastructure-as-code (e.g., Terraform, Docker). Knowledge of data governance and data lineage tools. Ability to establish repeatable processes and best practices, and to implement version control software in a cloud team environment. At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
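A minimal sketch of the idempotent batch-upsert pattern at the heart of ETL/ELT pipelines like those described above, using SQLite (3.24+) from the standard library as a stand-in for Snowflake or Azure SQL; the table and columns are invented for the example. On Snowflake, the equivalent step would typically be a MERGE statement.

```python
# Minimal sketch of an idempotent batch upsert. SQLite stands in for the
# warehouse; the members table and its columns are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE members (
    member_id INTEGER PRIMARY KEY,
    plan TEXT,
    updated_at TEXT)""")

batch = [
    (1, "gold", "2025-08-01"),
    (2, "silver", "2025-08-01"),
]

# ON CONFLICT makes reruns safe: replaying the same batch converges to
# the same state instead of duplicating rows.
conn.executemany("""
    INSERT INTO members (member_id, plan, updated_at)
    VALUES (?, ?, ?)
    ON CONFLICT(member_id) DO UPDATE SET
        plan = excluded.plan,
        updated_at = excluded.updated_at
""", batch)
conn.commit()
print(conn.execute("SELECT * FROM members").fetchall())
```

Idempotency is what lets a failed nightly run be retried without manual cleanup, which is central to pipeline reliability.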
Posted 1 week ago
0.0 - 1.0 years
1 - 1 Lacs
Noida
On-site
We are looking for a data engineer intern or trainee with the below key skills:
SQL, database tuning, and performance
Airflow, implemented using Python or Scala
Python and PySpark
AWS Redshift, Snowflake, or Databricks for data warehousing
ETL services in AWS such as EMR, Glue, S3, and Redshift, or similar services in GCP or Azure
This opening is for both freshers and candidates with up to 1 year of experience. On-the-job training will be provided for freshers. B.Tech candidates with no prior IT experience can also apply. Job Types: Full-time, Permanent, Fresher. Pay: ₹120,000.00 - ₹180,000.00 per year. Benefits: Paid sick time. Schedule: Monday to Friday. Supplemental Pay: Performance bonus. Experience: total work: 1 year (Preferred). Work Location: In person. Expected Start Date: 04/08/2025
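For context on the "Airflow implemented using Python" skill, here is a minimal sketch of a daily two-task DAG, assuming a recent Airflow 2.x install; the dag_id and task bodies are placeholders.

```python
# Minimal Airflow DAG sketch: one daily extract task feeding a load task.
# Requires Airflow 2.4+ for the `schedule` argument (older 2.x versions
# use `schedule_interval`). All names here are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull from source system")


def load():
    print("write to Redshift/Snowflake/Databricks")


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 8, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```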
Posted 1 week ago
6.0 - 9.0 years
0 Lacs
Andhra Pradesh
On-site
Data Lead Engineer Exp: 6 - 9 years
1. Design, develop, and maintain efficient and reliable data pipelines using Java, Scala, Apache Spark, and Confluent Cloud (Kafka, KStreams, ksqlDB, Schema Registry).
2. Leverage Apache Spark (Java/Scala) for large-scale data processing and transformation.
3. Experience building, maintaining, and debugging applications and data pipelines using Confluent Cloud (Kafka, KStreams, ksqlDB, Schema Registry).
4. Build and optimize data storage solutions using NoSQL databases such as ScyllaDB and/or Cassandra.
5. Experience with AWS services required for data engineering, such as EMR, EMR Serverless, AWS Glue, CodeCommit, EC2, and S3.
6. Familiarity with workflow orchestration tools such as Airflow.
7. Experience building and deploying applications using Docker, AWS ECS, or AWS EKS.
8. Well versed in code management using tools like GitHub, CI/CD pipelines, and deployment of data pipelines on the AWS cloud.
9. Implement and manage search and analytics capabilities using AWS OpenSearch and/or Elasticsearch.
10. Collaborate with data scientists, analysts, and other engineers to understand data requirements and deliver effective solutions.
11. Monitor and troubleshoot data pipelines to ensure data quality and performance.
12. Implement data governance and data quality best practices.
13. Automate data ingestion, processing, and deployment processes.
14. Stay up to date with the latest data engineering trends and technologies.
15. Contribute to the design and architecture of our data platform on AWS.
About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
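A hedged sketch of the Kafka consumption loop behind pipelines like those in points 1-3 above, using the confluent-kafka Python client for brevity (the role itself centers on Java/Scala). The broker address, topic, and consumer group are placeholders.

```python
# Minimal Confluent Kafka consumer sketch. Broker, topic, and group are
# illustrative; a Confluent Cloud deployment would add auth settings.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # Confluent Cloud endpoint in practice
    "group.id": "enrichment-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        event = json.loads(msg.value())
        # Transform/enrich here, then write to ScyllaDB/Cassandra or S3.
        print("processed order", event.get("order_id"))
finally:
    consumer.close()  # commit offsets and leave the group cleanly
```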
Posted 1 week ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Where Data Does More. Join the Snowflake team. We are looking for people who have a strong background in data science and cloud architecture to join our AI/ML Workload Services team to create exciting new offerings and capabilities for our customers! This team within the Professional Services group will be working with customers using Snowflake to expand their use of the Data Cloud, bringing data science pipelines from ideation to deployment and beyond, using Snowflake's features and its extensive partner ecosystem. The role will be highly technical and hands-on: you will be designing solutions based on requirements and coordinating with customer teams and, where needed, Systems Integrators. AS A SOLUTIONS ARCHITECT - AI/ML AT SNOWFLAKE, YOU WILL: Be a technical expert on all aspects of Snowflake in relation to the AI/ML workload. Build and deploy ML pipelines using Snowflake features and/or Snowflake ecosystem partner tools, based on customer requirements. Work hands-on where needed, using SQL, Python, Java, and/or Scala to build POCs that demonstrate implementation techniques and best practices on Snowflake technology within the Data Science workload. Follow best practices, including ensuring knowledge transfer so that customers are properly enabled and are able to extend the capabilities of Snowflake on their own. Maintain a deep understanding of competitive and complementary technologies and vendors within the AI/ML space, and how to position Snowflake in relation to them. Work with System Integrator consultants at a deep technical level to successfully position and deploy Snowflake in customer environments. Provide guidance on how to resolve customer-specific technical challenges. Help other members of the Professional Services team develop their expertise. Collaborate with Product Management, Engineering, and Marketing to continuously improve Snowflake's products and marketing. OUR IDEAL SOLUTIONS ARCHITECT - AI/ML WILL HAVE: A minimum of 10 years of experience working with customers in a pre-sales or post-sales technical role. Skill in presenting to both technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos. A thorough understanding of the complete data science life cycle, including feature engineering, model development, model deployment, and model management. A strong understanding of MLOps, coupled with technologies and methodologies for deploying and monitoring models. Experience with, and an understanding of, at least one public cloud platform (AWS, Azure, or GCP). Experience with at least one data science tool such as AWS SageMaker, AzureML, Dataiku, DataRobot, H2O, or Jupyter Notebooks. Hands-on scripting experience with SQL and at least one of the following: Python, Java, or Scala. Experience with libraries such as pandas, PyTorch, TensorFlow, scikit-learn, or similar. A university degree in computer science, engineering, mathematics, or related fields, or equivalent experience. BONUS POINTS FOR HAVING: Experience with Databricks/Apache Spark. Experience implementing data pipelines using ETL tools. Experience working in a Data Science role. Proven success at enterprise software. Vertical expertise in a core vertical such as FSI, Retail, Manufacturing, etc. Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?
For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
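The data science life cycle this role covers (feature engineering, model development, deployment, management) can be sketched generically with scikit-learn; on Snowflake the equivalent would typically run through Snowpark and partner tooling. The dataset, model choice, and file name are illustrative assumptions.

```python
# Minimal sketch of the feature-engineering -> training -> persistence
# loop; scikit-learn stands in for whichever stack a customer uses.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),           # feature engineering step
    ("clf", RandomForestClassifier(random_state=0)),
])
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Persisting the fitted pipeline is the hand-off point to deployment and
# model management (registry, monitoring, retraining triggers).
joblib.dump(model, "churn_model.joblib")
```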
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Syniverse is the world's most connected company. Whether we're developing the technology that enables intelligent cars to safely react to traffic changes or freeing travelers to explore by keeping their devices online wherever they go, we believe in leading the world forward. Which is why we work with some of the world's most recognized brands. Eight of the top 10 banks. Four of the top 5 global technology companies. Over 900 communications providers. And how we're able to provide our incredible talent with an innovative culture and great benefits. Who We're Looking For: The Data Engineer I is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems or building new solutions from the ground up. This role will work with developers, architects, product managers, and data analysts on data initiatives and ensure optimal data delivery with good performance and uptime metrics. Your behaviors align strongly with our values. Some Of What You'll Do: Scope of the Role: Direct Reports: This is an individual contributor role with no direct reports. Key Responsibilities: Create, enhance, and maintain optimal data pipeline architecture and implementations. Analyze data sets to meet functional/non-functional business requirements. Identify, design, and implement data process improvements: automating processes, optimizing data delivery, etc. Build infrastructure and tools to increase data ETL velocity. Work with data and analytics experts to implement and enhance analytic product features. Provide life-cycle support to the Operations team for existing products, services, and functionality assigned to the Data Engineering team. Experience, Education, And Certifications: Bachelor's degree in Computer Science, Statistics, Informatics, or a related field, or equivalent work experience. Software development experience is desired. Experience in the data engineering field is desired. Experience in building and optimizing big data pipelines, architectures, and data sets. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL databases, such as PostgreSQL, MySQL, etc. Experience with stream-processing systems: Flink, KSQL, Spark Streaming, etc. Experience with programming languages such as Java, Scala, Python, etc. Experience with cloud data engineering and development, such as AWS, etc. Additional Requirements: Familiarity with Agile software design processes and methodologies. Good analytic skills related to working with structured and unstructured datasets. Knowledge of message queuing, stream processing, and scalable big data stores. Ownership/accountability for tasks/projects, with on-time, quality deliveries. Good verbal and written communication skills. Teamwork with independent design and development habits. Work with a sense of urgency and a positive attitude. Why You Should Join Us: Join us as we write a new chapter, guided by world-class leadership. Come be a part of an exciting and growing organization where we offer competitive total compensation, flexible/remote work, and a leadership team committed to fostering an inclusive, collaborative, and transparent organizational culture. At Syniverse, connectedness is at the core of our business. We believe diversity, equity, and inclusion among our employees is crucial to our success as a global company as we seek to recruit, develop, and retain the most talented people who want to help us connect the world. Know someone at Syniverse?
Be sure to have them submit you as a referral prior to applying for this position.
Posted 1 week ago
6.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Job Details: Experience - 6 to 12 yrs. Mandatory Skills - Data Science, Gen AI, Python, RAG, Azure/AWS/GCP, AI/ML, NLP. Location - Mumbai, Pune, Bangalore, Chennai, Hyderabad, and Kolkata. Notice - Immediate to 60 days.
Generic JD - Mandatory Skills: Data Science, Gen AI, Python, RAG, Azure/AWS/GCP, AI/ML, NLP. Secondary (any): Machine Learning, Deep Learning, ChatGPT, LangChain, prompt engineering, vector stores, RAG, LLaMA, computer vision, OCR, Transformers, regression, forecasting, classification, hyperparameter tuning, MLOps, inference, model training, model deployment.
JD: More than 6 years of experience in the Data Engineering, Data Science, and AI/ML domain. Excellent understanding of machine learning techniques and algorithms, such as GPTs, CNN, RNN, k-NN, Naive Bayes, SVM, Decision Forests, etc. Experience using business intelligence tools (e.g., Tableau, Power BI) and data frameworks (e.g., Hadoop). Cloud-native experience. Knowledge of SQL and Python; familiarity with Scala, Java, or C++ is an asset. An analytical mind, business acumen, and strong math skills (e.g., statistics, algebra). Experience with common data science toolkits, such as TensorFlow, Keras, PyTorch, pandas, Microsoft CNTK, NumPy, etc.; deep expertise in at least one of these is highly desirable. Experience with NLP, NLG, and Large Language Models like BERT, LLaMA, LaMDA, GPT, BLOOM, PaLM, DALL-E, etc. Great communication and presentation skills. Should have experience working in a fast-paced team culture. Experience with AI/ML and Big Data technologies like AWS SageMaker, Azure Cognitive Services, Google Colab, Jupyter Notebook, Hadoop, PySpark, Hive, AWS EMR, etc. Experience with NoSQL databases, such as MongoDB, Cassandra, HBase, and vector databases. Good understanding of applied statistics skills, such as distributions, statistical testing, regression, etc. Should be a data-oriented person with an analytical mind and business acumen.
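Since RAG appears in the mandatory skills, here is a numpy-only sketch of its retrieval step: embed documents, rank them by cosine similarity, and assemble a grounded prompt. The hash-based "embedding" here is not semantic; it only demonstrates the mechanics. A real system would call an embedding model (e.g., a sentence transformer or a hosted API) and a vector store.

```python
# Minimal RAG retrieval sketch: rank documents by cosine similarity and
# build a grounded prompt. The toy embedder is a placeholder, not a
# semantic model; documents and the query are illustrative.
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Deterministic placeholder "embedding" seeded from the text's hash.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)  # unit vector, so dot product = cosine


docs = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise plans.",
    "Invoices are emailed on the first day of each month.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "How long do refunds take?"
scores = doc_vecs @ embed(query)          # cosine similarities
top = docs[int(np.argmax(scores))]        # best-matching document

prompt = f"Answer using only this context:\n{top}\n\nQuestion: {query}"
print(prompt)  # would be sent to the LLM of choice
```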
Posted 1 week ago
6.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
Data Lead Engineer Exp: 6 - 9 years
Design, develop, and maintain efficient and reliable data pipelines using Java, Scala, Apache Spark, and Confluent Cloud (Kafka, KStreams, ksqlDB, Schema Registry).
Leverage Apache Spark (Java/Scala) for large-scale data processing and transformation.
Experience building, maintaining, and debugging applications and data pipelines using Confluent Cloud (Kafka, KStreams, ksqlDB, Schema Registry).
Build and optimize data storage solutions using NoSQL databases such as ScyllaDB and/or Cassandra.
Experience with AWS services required for data engineering, such as EMR, EMR Serverless, AWS Glue, CodeCommit, EC2, and S3.
Familiarity with workflow orchestration tools such as Airflow.
Experience building and deploying applications using Docker, AWS ECS, or AWS EKS.
Well versed in code management using tools like GitHub, CI/CD pipelines, and deployment of data pipelines on the AWS cloud.
Implement and manage search and analytics capabilities using AWS OpenSearch and/or Elasticsearch.
Collaborate with data scientists, analysts, and other engineers to understand data requirements and deliver effective solutions.
Monitor and troubleshoot data pipelines to ensure data quality and performance.
Implement data governance and data quality best practices.
Automate data ingestion, processing, and deployment processes.
Stay up to date with the latest data engineering trends and technologies.
Contribute to the design and architecture of our data platform on AWS.
Posted 1 week ago