
32199 Docker Jobs - Page 50

JobPe aggregates results for easy access; you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

The successful candidate will report to the Sr. Full Stack Development Lead and play a crucial role in executing and delivering well-tested code. You will work on a variety of projects, including core applications and customer-facing interfaces, as part of a larger geographically distributed team, where collaboration and sharing insights will be essential in driving process improvements.

Your responsibilities will include designing, building, and maintaining efficient, reusable, and tested code. You will drive the expansion of the client's Python- and Ruby-based web services and engineer scalable, secure APIs out of deprecated legacy code. Keeping up to date with new technologies and applying them to solve challenges will be a key aspect of your role. Additionally, you will work closely with the client's product team and contribute to an inclusive team environment where pairing and peer-reviewed code are valued.

Qualifications for this position include:
- 5+ years of software development experience and 3+ years of Python development experience
- 1+ years of Ruby experience (preferred)
- 3+ years of experience with web frameworks (preferred: Rails or Rack, Django)
- 1+ years of experience with Angular, React, or Vue.js
- Demonstrated experience with AWS services (preferred: Lambda, SQS, S3)
- Experience working in a software product-driven environment
- Knowledge of front-end technologies such as JavaScript, HTML5, and CSS3
- Familiarity with relational databases (e.g., MySQL, Postgres)
- BS/MS degree in Computer Science or equivalent experience
- Knowledge of version control systems such as Git
- Familiarity with Docker and testing libraries (e.g., RSpec, pytest, Jest)
- Experience with linters (e.g., RuboCop, Flake8, ESLint)

In this role, you will work with Python, React, AWS, and Node technologies.
The company, GlobalLogic, offers a culture of caring, learning and development opportunities, interesting and meaningful work, balance and flexibility, and is known for being a high-trust organization. GlobalLogic, a Hitachi Group company, is a trusted digital engineering partner known for creating innovative digital products and experiences.
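The posting's emphasis on well-tested code and libraries like pytest can be shown with a minimal sketch. The `slugify` helper and its tests are hypothetical examples, not part of the job description; pytest discovers functions named `test_*`, and plain `assert` statements double as documentation.

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumerics with hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# pytest would collect these automatically; calling them directly also works.
def test_slugify_collapses_punctuation():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_handles_empty_input():
    assert slugify("") == ""

if __name__ == "__main__":
    test_slugify_collapses_punctuation()
    test_slugify_handles_empty_input()
    print("all tests passed")
```

Running `pytest` against a file like this would report both tests passing; no framework beyond the standard library is needed to execute it directly.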

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

We are seeking an experienced Cloud Solution Architect with over 10 years of experience to create and implement scalable, secure, and cost-efficient cloud solutions across platforms such as AWS, Azure, GCP, Akamai, and on-premise environments. Your role will involve driving cloud transformation and modernization initiatives, particularly in Akamai Cloud and complex cloud migrations. Collaboration with clients, product teams, developers, and operations will be essential to translate business requirements into effective architectural solutions aligned with industry standards and best practices.

Your responsibilities will include designing and deploying robust, cost-effective cloud architectures; leading end-to-end cloud migration projects; evaluating and recommending cloud products and tools; and developing Infrastructure as Code (IaC) best practices using Terraform, CloudFormation, or ARM templates. You will work closely with development, security, and DevOps teams to establish cloud-native CI/CD workflows, integrate compliance and security measures into deployment plans, and provide technical leadership and mentorship to engineering teams. Maintaining clear architectural documentation and implementation guides will also be a crucial aspect of this role.

To be considered for this position, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 10 years of IT experience, including a minimum of 8 years in cloud architecture roles. Demonstrated expertise in cloud migrations and proficiency in one or more cloud platforms (AWS, Azure, GCP, Akamai) are required, as is strong knowledge of microservices, containerization (Docker, Kubernetes), cloud-native services (e.g., Lambda, Azure Functions, GKE), and IaC tools (Terraform, CloudFormation, ARM templates). A solid understanding of cloud networking, identity management, storage, and security, as well as experience in cloud cost optimization and performance tuning, is also necessary. Excellent English communication and presentation skills are expected in this role. Kindly review the complete job description and requirements before submitting your application.

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You will be responsible for leading a team of DevOps engineers in Ahmedabad. Your main duties will include managing and mentoring the team and overseeing the deployment and maintenance of applications such as Odoo, Magento, and Node.js services. You will also design and manage CI/CD pipelines using tools like Jenkins and GitLab CI, handle environment-specific configurations, and containerize applications using Docker.

In addition, you will implement and maintain Infrastructure as Code using tools like Terraform and Ansible, monitor application health and infrastructure, and ensure systems are secure, resilient, and compliant with industry standards. Collaboration with development, QA, and IT support teams is essential for seamless delivery, and troubleshooting performance, deployment, or scaling issues across tech stacks will also be part of your responsibilities.

To be successful in this role, you should have at least 6 years of experience in DevOps/cloud/system engineering roles, with a minimum of 2 years managing or leading DevOps teams. Hands-on experience with Odoo, Magento, Node.js, and AWS/Azure/GCP infrastructure is required. Strong scripting skills in Bash, Python, PHP, or the Node CLI, as well as a deep understanding of Linux system administration and networking fundamentals, are essential. Experience with Git, SSH, reverse proxies, and load balancers is also necessary, along with good communication skills and client-management exposure.

Preferred certifications include AWS Certified DevOps Engineer - Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Nice-to-have skills include experience with multi-region failover, HA clusters, MySQL/PostgreSQL optimization, GitOps, ArgoCD, Helm, VAPT 2.0, WCAG compliance, and infrastructure security best practices.
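The health-monitoring and troubleshooting duties described above often start with log analysis. A minimal, hypothetical sketch: counting ERROR entries per host so an on-call engineer can spot a misbehaving node. The log format and host names are invented for illustration.

```python
from collections import Counter

# Hypothetical log lines; real deployments would tail files or query a log store.
LOG_LINES = [
    "2024-05-01T10:00:01 INFO  web-1 request handled in 120ms",
    "2024-05-01T10:00:02 ERROR web-2 upstream timeout after 3000ms",
    "2024-05-01T10:00:03 WARN  web-1 slow query: 900ms",
    "2024-05-01T10:00:04 ERROR web-2 upstream timeout after 3000ms",
]

def error_counts_by_host(lines):
    """Count ERROR entries per host to highlight a failing node."""
    counts = Counter()
    for line in lines:
        parts = line.split()  # [timestamp, level, host, ...]
        if len(parts) >= 3 and parts[1] == "ERROR":
            counts[parts[2]] += 1
    return dict(counts)

print(error_counts_by_host(LOG_LINES))  # {'web-2': 2}
```

In practice a monitoring stack would do this aggregation continuously, but the same grouping logic underlies most alerting rules.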

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Telangana

On-site

As a Full Stack Developer, you will be responsible for architecting, designing, and developing scalable NestJS microservices. Your role will involve developing and optimizing database schemas and queries in PostgreSQL, and building and maintaining modern, responsive frontend interfaces using React.js, Next.js, or Angular, depending on the stack requirements. Collaboration with product managers, designers, and other developers will be essential to delivering high-quality features. Implementing unit and integration tests to ensure code quality, leading code reviews, mentoring junior developers, and guiding best practices will also be part of your responsibilities, as will ensuring the security, performance, and scalability of the application.

Your expertise in NestJS and the Node.js ecosystem, along with solid experience in PostgreSQL, including query optimization and schema design, will be highly valued. A deep understanding of microservices architecture and message brokers such as Kafka, RabbitMQ, and NATS is essential. Experience with frontend frameworks like React.js, Next.js, or Angular is required, and knowledge of RESTful APIs, GraphQL, and authentication methods such as JWT and OAuth is expected. Proficiency with Docker and cloud platforms such as AWS, GCP, or Azure is necessary, and familiarity with testing frameworks like Jest and Mocha is beneficial. Excellent problem-solving and communication skills are also essential for this role.

Nice-to-have skills include experience with event-driven architectures and CQRS, familiarity with TypeORM or Prisma, knowledge of Kubernetes or serverless functions, and exposure to DevOps practices and infrastructure as code using tools like Terraform.
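The JWT authentication mentioned above can be illustrated with a minimal sketch: a JWT is three base64url-encoded segments (header.payload.signature), and the payload's claims can be inspected without verification. In production, a JWT library must verify the signature against the signing key; this sketch is for debugging illustration only, and the claims are hypothetical.

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # Base64url omits padding; restore it before decoding.
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)

def decode_jwt_payload(token: str) -> dict:
    """Inspect a JWT's claims WITHOUT verifying the signature (debugging only)."""
    header_b64, payload_b64, _signature_b64 = token.split(".")
    return json.loads(b64url_decode(payload_b64))

def b64url_encode(obj) -> str:
    """Encode a JSON object as an unpadded base64url segment."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Build a sample (unsigned) token so the sketch is self-contained.
token = ".".join([
    b64url_encode({"alg": "HS256", "typ": "JWT"}),
    b64url_encode({"sub": "user-42", "role": "admin"}),
    "fake-signature",
])
print(decode_jwt_payload(token))  # {'sub': 'user-42', 'role': 'admin'}
```

Because the signature is never checked here, nothing in the decoded claims can be trusted; that is exactly why real services delegate verification to a maintained JWT library.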

Posted 1 day ago

Apply

10.0 - 14.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

The role requires you to drive the technical vision, establish scalable infrastructure, and lead engineering efforts to enable next-gen AI-driven learning experiences. You will be responsible for creating technology standards and practices and ensuring their implementation across the organization. Your role will involve strategizing the use of technology and evaluating and implementing solutions to meet present and future requirements. It will be essential to regularly review code standards, monitor technology performance metrics, and oversee system design and architectural changes. Staying updated with emerging trends and best practices in technology will be crucial. You will be expected to demonstrate thought leadership, innovation, and creativity, working closely with cross-functional teams on project deliverables and enhancements while ensuring compliance with regulatory standards.

To excel in this position, you should have prior experience as a Head of Technology, with a strong understanding of web and mobile architecture, scalable systems, and cloud-native solutions. Proficiency in technologies such as Flutter, React.js, Node.js, AngularJS, MySQL, MongoDB, JavaScript, and Azure is essential. Experience in live video streaming, AI-based systems, and SaaS platforms will be an added advantage. Familiarity with DevOps practices, CI/CD pipelines, Docker, and Kubernetes is required. Hands-on experience in complex project management, advanced technological skills, and a proven track record in technology are essential. Strong team management, communication, interpersonal, and leadership skills will be critical for success in this role.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Hyderabad Area

On-site

About Persistent

We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor's mindset, commitment to client success, and agility to thrive in a dynamic environment have enabled us to sustain our growth momentum, reporting $360.2M revenue in Q3 FY25 and delivering 4.3% Q-o-Q and 19.9% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we've maintained a strong employee satisfaction score of 8.2/10.

About the Position

We are a leading technology-driven company committed to delivering innovative software solutions to our clients worldwide. With a focus on building robust and scalable applications, we are looking for a talented and motivated Python Developer to join our dynamic team.
If you have a passion for coding and are looking to work on cutting-edge technologies, this is the perfect opportunity for you.

Role: Python Developer
Location: Bangalore/Hyderabad
Experience: 5 to 8 Years
Job Type: Full-Time Employment

What You'll Do
- Design, develop, and maintain scalable, efficient, and secure microservices-based applications using Python.
- Implement RESTful APIs with frameworks like Flask or FastAPI.
- Collaborate with cross-functional teams to define software requirements and deliver high-quality solutions.
- Build and manage databases using MongoDB and ensure efficient querying and data-storage practices.
- Ensure applications are robust, maintainable, and optimized for performance and scalability.
- Write unit tests and automate testing to ensure high-quality code.
- Participate in code reviews, maintain best practices, and follow coding standards.
- Troubleshoot, debug, and optimize applications for better performance.

Expertise You'll Bring
- 5 to 8 years of experience in Python development.
- Strong hands-on experience with microservices architecture and building scalable, distributed systems.
- Proficiency in building RESTful APIs with frameworks like Flask or FastAPI.
- Experience working with MongoDB, including designing schemas and writing efficient queries.
- Solid understanding of version control (Git) and agile methodologies (Scrum/Kanban).
- Excellent problem-solving skills and the ability to troubleshoot issues efficiently.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus.
- Ability to work in a fast-paced environment with multiple priorities.
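The MongoDB schema-design skill named above can be sketched minimally: MongoDB enforces no schema by default, so validating document shape before insertion keeps queries and indexes predictable. The user fields and rules here are hypothetical, not taken from the posting.

```python
# Expected shape of a hypothetical "users" collection document.
REQUIRED_FIELDS = {"email": str, "name": str, "age": int}

def validate_user_doc(doc: dict) -> list:
    """Return a list of problems; an empty list means the document is insertable."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in doc:
            problems.append(f"missing field: {field}")
        elif not isinstance(doc[field], expected_type):
            problems.append(f"{field} should be {expected_type.__name__}")
    return problems

print(validate_user_doc({"email": "a@b.com", "name": "Asha", "age": 30}))  # []
print(validate_user_doc({"email": "a@b.com", "age": "30"}))
# ['missing field: name', 'age should be int']
```

In a real service this kind of check would typically live in a MongoDB JSON-schema validator or a model layer, but the pre-insert validation idea is the same.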
Benefits
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a value-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."

Posted 1 day ago

Apply

7.0 - 11.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are a highly skilled and experienced Node.js Team Lead with a strong technical background in Node.js and experience leading a development team. You will be responsible for driving the backend development of an enterprise-grade SaaS platform, ensuring the high performance, scalability, and security of the application, and mentoring junior developers.

Your key responsibilities will include leading and mentoring a team of Node.js developers, providing technical guidance and support; planning and managing backend development projects; overseeing the design, development, and maintenance of server-side components using Node.js; leading the design and implementation of RESTful APIs; managing SQL and NoSQL databases; optimizing performance; implementing security best practices; collaborating with frontend developers and other stakeholders; ensuring thorough testing and quality-assurance processes; staying updated with industry trends and technologies; and proactively suggesting and implementing improvements to the development process and architecture.

You should possess a Bachelor's degree in Computer Science, Software Engineering, or a related field, with a minimum of 7 years of experience in backend development with Node.js, including at least 2 years in a team-lead or supervisory role. Additionally, you should be proficient in Node.js and Express.js (with TypeScript experience), have a strong understanding of RESTful APIs and microservices architecture, experience with SQL and NoSQL databases, familiarity with AWS services and cloud architecture, and knowledge of containerization technologies like Docker.

This is an immediate requirement, and candidates who can join at the earliest are preferred. If you meet the qualifications and have the required experience, please send your resume to recruitment@novastrid.com.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description

Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, and create marketing solutions, all using our unique combination of data, analytics, and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description

We are looking for an enthusiastic Test Engineer to work at the forefront of our cloud modernization within our Credit & Verification Services. This is a hybrid role requiring travel to the Hyderabad office for roughly 40% of working days per month. You will be based in Hyderabad and report to a Director. Responsibilities include:
- Design, develop, and execute manual and automated test scripts based on business requirements.
- Develop, maintain, and enhance automation frameworks for UI, API, and backend testing.
- Create and maintain test data, test environments, and automation scripts.
- Perform various types of testing: white-box, black-box, integration, UI, and API testing.
- Collaborate closely with developers, product owners, and other stakeholders in an Agile environment; participate in code reviews, defect triaging, and continuous-improvement discussions.
- Analyze test results, identify root causes, and provide detailed reports.
- Contribute to CI/CD pipeline integration with automated tests.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 4 years of hands-on experience in quality engineering, software testing, or test automation
- Solid understanding of SDLC, STLC, and Agile/Scrum processes
- Passion for quality, learning new technologies, and continuous improvement

Required Technical Skills & Knowledge
- Hands-on experience with UI and API automation tools such as Selenium, Cypress, Playwright, REST Assured, or Postman
- Proficiency in Behavior-Driven Development (BDD) and Test-Driven Development (TDD) approaches
- Working knowledge of SQL for data validation
- Experience in at least one programming/scripting language: Java, Scala, Python, or JavaScript
- Basic understanding of containers (Docker) and CI/CD processes
- Experience with version control systems such as Git, Bitbucket, GitHub, or GitLab
- Working knowledge of API testing, backend validations, and functional automation
- Clear understanding of the defect lifecycle, reporting, and issue-tracking tools like Jira
- Strong collaboration skills and a mindset of quality ownership

Desirable and Useful Skills
- Exposure to AWS services such as S3, Lambda, Step Functions, and Glue
- Understanding of Agile delivery frameworks (Scrum or Kanban)
- Knowledge of performance-testing tools
- Basic knowledge of big-data testing

Additional Information

Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Global Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social media or our Careers Site and Glassdoor to understand why.
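The TDD approach listed among the required skills can be shown in miniature: write a failing test first, then the smallest implementation that makes it pass. The card-number-masking rule below is a hypothetical example, not from the posting.

```python
# TDD in miniature: the test below was written first and drove the implementation.
def mask_pan(pan: str) -> str:
    """Keep the last 4 digits of a card number and mask the rest."""
    digits = pan.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def test_mask_pan_keeps_last_four():
    assert mask_pan("4111 1111 1111 1234") == "************1234"

test_mask_pan_keeps_last_four()
print("test passed")
```

The same red-green cycle scales up: each new behavior (rejecting short inputs, handling dashes) would start as a new failing test before any implementation change.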
Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability, or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Benefits

Experian cares for employees' work-life balance, health, safety, and wellbeing. In support of this endeavor, we offer best-in-class family wellbeing benefits, enhanced medical benefits, and paid time off.

Experian Careers - Creating a better tomorrow together. Find out what it's like to work for Experian on our careers site.

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

The purpose of the role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client site while ensuring they meet 100% quality-assurance parameters. You should have at least 8 years of tech experience, preferably with exposure to capital-markets front-office trading.

As an excellent Java developer, you should possess good software design principles, the ability to write robust code, and the ability to create accompanying test suites. It is essential to write efficient and clear code, with the capability to explain what has been implemented and why. Proficiency in Core Java, data structures, algorithms, concurrency, and multi-threading is required. Additionally, you should be strong in Spring Boot, microservices, messaging (Kafka/Solace), and RDBMS (Oracle or any other RDBMS). Expertise in CI/CD (Azure DevOps preferred), Docker/OpenShift, and the ELK stack is necessary. Ideally, you should also have full-stack skills with knowledge of React.js and TypeScript to contribute to UI development when necessary.

Agile experience is a must, and fluency in English with good communication skills is essential. The role requires the ability to work independently with minimal supervision, a sense of ownership, and the capability to handle ambiguity. Furthermore, you should be comfortable working in a global environment and interacting with stakeholders outside India.

Join us at Wipro to be part of a modern organization that is an end-to-end digital transformation partner with ambitious goals. We are looking for individuals who are inspired by reinvention, be it of themselves, their careers, or their skills. At Wipro, we encourage constant evolution in our business and industry, adapting to the changing world around us. Be part of a purpose-driven business that empowers you to design your reinvention. Realize your ambitions at Wipro, where applications from people with disabilities are explicitly welcome.
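The messaging systems named above (Kafka, Solace) generalize the classic producer-consumer pattern. A minimal in-process sketch of that pattern, written in Python for brevity even though the role itself is Java-focused; the "orders" topic and sentinel shutdown are illustrative conventions, not broker APIs.

```python
import queue
import threading

orders = queue.Queue()   # thread-safe FIFO standing in for a broker topic
SENTINEL = None          # sentinel object signalling end-of-stream
processed = []

def producer():
    """Publish five messages, then signal the consumer to stop."""
    for i in range(5):
        orders.put(f"order-{i}")
    orders.put(SENTINEL)

def consumer():
    """Drain messages until the sentinel arrives."""
    while True:
        msg = orders.get()
        if msg is SENTINEL:
            break
        processed.append(msg)

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(processed)  # ['order-0', 'order-1', 'order-2', 'order-3', 'order-4']
```

Real brokers add durability, partitioning, and consumer groups on top, but the decoupling between producer and consumer threads is the same core idea.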

Posted 1 day ago

Apply

0.0 - 4.0 years

0 Lacs

Karnataka

On-site

As an intern at Dwata Tech, you will have the opportunity to work on a variety of backend development tasks using Node.js. Your day-to-day responsibilities will include assisting in the development of backend applications, working with REST APIs, and integrating with databases such as PostgreSQL. You will also have the chance to learn and contribute to cloud-based deployments on AWS, participate in debugging and resolving backend issues, and document code and technical processes clearly. Collaboration with the frontend and QA teams will be a key aspect of your role. To excel in this internship, you should have a basic knowledge of Node.js and JavaScript, be familiar with REST APIs, and possess an understanding of relational databases, preferably PostgreSQL. A willingness to learn new technologies like Kafka and AWS, along with good communication and problem-solving skills, will be beneficial for your growth in this role. During your internship at Dwata Tech, you will gain valuable real-time development experience working on production-level systems. You will receive guidance from experienced backend developers, and upon successful completion, you will be awarded an internship certificate with the potential for a Pre-Placement Offer (PPO). Additionally, you will be exposed to tools such as Kafka, AWS, and Docker, enhancing your skill set and knowledge in the field. Dwata Tech is a tech-intensive company founded by individuals with over 20 years of combined experience from renowned companies like Google, Motorola, Samsung, and successful start-ups. Our team consists of highly skilled engineers dedicated to product development, utilizing cutting-edge technology in custom product development, data engineering, and cybersecurity. We collaborate with start-ups to provide extended engineering support, catering to clients in Israel, the US, and India. 
Our team is focused on delivering outcomes, providing you with ample opportunities for learning and exploration during your long-term engagement with us.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The Lead Scala Developer role, based in Pune, requires 4-8 years of experience and availability to join within 15 days. As a Lead Scala Developer, you will develop and maintain scalable microservices using Scala, Akka, and/or Lagom. You will build containerized applications with Docker and orchestrate them using Kubernetes, manage real-time messaging with Apache Pulsar, integrate databases using the Slick connector and PostgreSQL, enable search and analytics features with Elasticsearch, and streamline deployment workflows using GitLab CI/CD pipelines. Collaboration with cross-functional teams and writing clean, well-structured, maintainable code are essential aspects of this role.

The ideal candidate will have a strong understanding of microservices architecture and distributed systems, and expertise in the Akka or Lagom frameworks. Proficiency in Docker, Kubernetes, Apache Pulsar, PostgreSQL, Elasticsearch, GitLab, CI/CD pipelines, and deployment processes is required. Hands-on experience with Kafka or RabbitMQ, monitoring and logging tools such as Prometheus, Grafana, and the ELK stack, frontend frameworks like React or Angular, and cloud platforms like AWS, GCP, or Azure is considered advantageous. Candidates with a Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field are preferred.

Joining this opportunity will expose you to high-performance systems with modern architecture, a growth-oriented environment, access to cutting-edge tools and learning resources, and opportunities for long-term growth, upskilling, and mentorship. A healthy work-life balance with onsite amenities and team events is also emphasized.

If you possess the required skills and experience in Scala development, microservices architecture, and distributed systems, and are looking to work on challenging projects with a collaborative team, this Lead Scala Developer role could be the next step in your career.

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Chandigarh, India

On-site

Skills: AI/ML, Machine Learning, TensorFlow, CI/CD, AWS, DevOps, Azure

Job Description
- Minimum 5-8 years of experience in Data Science and Machine Learning.
- In-depth knowledge of machine learning, deep learning in Computer Vision (CV), and generative AI techniques.
- Proficiency in programming languages such as Python and frameworks like TensorFlow or PyTorch.
- Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models.
- Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment.
- Expertise in data engineering, including data curation, cleaning, and preprocessing.
- Knowledge of trusted-AI practices, such as fairness, transparency, and accountability in AI models and systems.
- Strong collaboration with engineering teams to ensure seamless integration and deployment of AI models.
- Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
- Understanding of data privacy, security, and ethical considerations in AI applications.
- Track record of driving innovation and staying updated with the latest trends.

Mandatory Skills
- Generative AI techniques; NLP techniques; BERT, GPT, or Transformer models
- Azure OpenAI GPT models, Hugging Face Transformers, prompt engineering
- Python; knowledge of frameworks like TensorFlow or PyTorch, LangChain; R
- Deploying AI solutions in Azure, AWS, or GCP
- Deep learning in Computer Vision (CV), Large Language Models (LLMs)

Good-to-Have Skills
- Knowledge of DevOps and MLOps practices
- Implementing CI/CD pipelines
- Using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines
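The Transformer models listed above are built around scaled dot-product attention: each value vector is weighted by how well its key matches the query. A toy pure-Python version with made-up two-dimensional vectors (real models use tensor libraries and multiple heads):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a list of key/value pairs."""
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(dimension).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, keys, values)
print([round(x, 1) for x in out])  # [6.7, 3.3]
```

Because the query aligns with the first key, the first value dominates the output; the 1/sqrt(d) scaling keeps scores from saturating the softmax as dimensions grow.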

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

As a Consultant in the field of .NET React, you will be responsible for leveraging your expertise to deliver efficient and innovative solutions to our clients. Your role will involve understanding our customers" businesses thoroughly before delving into the technological aspects. By following this approach, you will contribute to the creation of intelligent software that optimizes existing processes, drives cost savings, and facilitates positive transformations. Working alongside a global team of over 2,000 professionals, you will play a key part in enhancing essential services across various sectors. Our technology has already made significant impacts, from assisting the NHS in screening millions of babies for hearing loss to supporting housing providers in managing their properties effectively. Moreover, our solutions aid officers in multiple police forces worldwide in making informed decisions at the frontline. To excel in this role, you should have at least 5 years of experience as a Full Stack Developer, with a minimum of 2 years dedicated to building frontend applications using React. Your proficiency in .NET technologies such as C# and ASP.NET Core, coupled with a strong command of React (JavaScript, TypeScript), will be crucial for your success. Moreover, familiarity with the Material-UI library for crafting responsive and visually appealing user interfaces is desired. Your responsibilities will also include Web API development and integration, along with knowledge of Docker containerization for seamless application deployment and scaling. A solid grasp of SOLID principles, Test-Driven Development (TDD), and Agile methodologies will be essential for delivering high-quality solutions. Experience with DevOps practices, particularly Azure DevOps, will be advantageous. Furthermore, your expertise in secure coding practices, problem-solving abilities, and analytical skills will be put to good use in this role. 
Effective communication and collaboration skills are also vital for engaging with various stakeholders and ensuring successful project outcomes. If you are looking for a challenging opportunity to work in a dynamic environment that encourages growth and innovation, we invite you to join our team. With the backing of NEC Corporation, a recognized leader in IT and network technologies, you will have endless possibilities for advancement and exploration in your career.

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Skills: Kubernetes cluster, Kubernetes, CI/CD, GCP, OpenShift, Red Hat Linux

Role - Kubernetes Expert & Architect
Education - B.E./B.Tech/MCA in Computer Science
Experience - 8-12 Years
Location - Mumbai/Bangalore/Gurgaon

Mandatory Skills (Docker and Kubernetes)
- Good understanding of the various components of a Kubernetes cluster
- Hands-on experience provisioning Kubernetes clusters
- Expertise in managing and upgrading Kubernetes clusters / Red Hat OpenShift platform
- Good experience with container storage
- Good experience with CI/CD workflows (preferably Azure DevOps, Ansible, and Jenkins)
- Hands-on experience with Linux operating system administration
- Understanding of cloud infrastructure, preferably VMware Cloud
- Good understanding of application lifecycle management on container platforms
- Basic understanding of cloud networks and container networks
- Good understanding of Helm and Helm charts
- Strong performance optimization skills for container platforms
- Good understanding of container monitoring tools such as Prometheus, Grafana, and ELK
- Able to handle Severity 1 and Severity 2 incidents
- Good communication skills and the capability to provide support
- Analytical and problem-solving capabilities; ability to work with teams
- Experience with a 24x7 operations support framework
- Knowledge of the ITIL process

Preferred Skills/Knowledge
- Container platforms: Docker, CRI-O, Kubernetes, and OpenShift
- Automation platforms: shell scripts, Ansible, Jenkins
- Cloud platforms: GCP/Azure/OpenStack
- Operating systems: Linux/CentOS/Ubuntu
- Container storage and backup

Desired Skills
- Certified Red Hat OpenShift Administrator
- Certification in administration of any cloud platform is an added advantage

Soft Skills
- Good troubleshooting skills
- Ready to learn new technologies and acquire new skills
- A team player, with good spoken and written English
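The Helm values handling mentioned above (chart defaults layered under user-supplied overrides) can be illustrated with a minimal sketch. This is a hypothetical recursive merge in Python for illustration only, not Helm's actual implementation:

```python
def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay `overrides` onto `defaults`, similar in spirit to
    how Helm combines a chart's values.yaml with user-supplied values."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Nested dicts merge key-by-key instead of being replaced wholesale.
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical chart defaults and a user override for an upgrade.
chart_defaults = {"image": {"repository": "nginx", "tag": "1.25"},
                  "replicaCount": 2}
user_values = {"image": {"tag": "1.27"}, "replicaCount": 3}

values = deep_merge(chart_defaults, user_values)
print(values["image"])  # → {'repository': 'nginx', 'tag': '1.27'}
```

Note how the untouched `repository` key survives the override, which is the behavior a cluster operator relies on when supplying partial values files.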

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

coimbatore, tamil nadu

On-site

As a Full Stack AI Developer, you will be responsible for designing, developing, and maintaining AI-powered full stack applications. Your key responsibilities will include building and optimizing REST APIs using FastAPI/Flask, integrating advanced LLM capabilities, creating backend services, implementing Generative AI features, developing front-end interfaces with Streamlit or React, and applying best practices in software engineering. You will also work with cloud platforms like AWS, Azure, and GCP to monitor AI systems in production environments.

Required Technical Skills
- Core Python Development: Proficiency in Python 3, including syntax, control flow, data structures, functions, error handling, file I/O, modules, decorators, list/dict comprehensions, and exception management.
- Scientific & Data Libraries: NumPy for array operations and broadcasting; Pandas for data manipulation, aggregation, and data cleaning; database integration with SQL, SQLite, MongoDB, Postgres, and vector DBs (Chroma, PGVector, FAISS).
- API Development: RESTful API design with FastAPI or Flask, async/await, dependency injection, request validation, JWT/OAuth2 auth, vector DB integration, and streaming LLM responses.
- Generative AI / LLMs: Understanding of Transformer architectures; LLMs (GPT, BERT, LLaMA); LangChain, LlamaIndex, Hugging Face Transformers; prompt engineering, parameter tuning, RAG, and agent-based workflows.
- Deployment & Containerization: Proficiency in Docker, Dockerfiles, Docker Compose, CI/CD pipelines, GitHub Actions, and deploying containerized services.
- Frontend Development: HTML5, CSS3, modern JavaScript (ES6+), Streamlit, React.js (with TypeScript), building conversational UIs, and integrating streaming outputs via WebSockets/SSE.

Desirable Skills: Experience with cloud services (AWS, Azure, GCP), monitoring and logging for AI systems, testing frameworks such as pytest, httpx, and React Testing Library, PDF extraction, OCR tools, frontend accessibility, and performance optimization.

Soft Skills: Strong communication and documentation skills, a problem-solving mindset, and the ability to collaborate across cross-functional teams.

Educational Qualification: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. Certifications or hands-on experience in Generative AI/LLMs is a plus.

In summary, this role is ideal for individuals passionate about developing end-to-end AI applications using GenAI products, LLMs, APIs, vector databases, and modern front-end stacks in a production-ready environment.
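The RAG workflow this role touches on (retrieve the most relevant chunks from a vector store, then feed them into an LLM prompt) can be sketched in miniature. This is an illustrative toy with hand-made three-dimensional "embeddings" standing in for a real embedding model and vector DB:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document store: (text, embedding) pairs. A real system would use an
# embedding model plus a vector DB such as Chroma or PGVector.
docs = [
    ("Docker packages apps into containers.", [0.9, 0.1, 0.0]),
    ("FastAPI builds async Python APIs.",     [0.1, 0.9, 0.1]),
    ("Postgres is a relational database.",    [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the top-k document texts ranked by cosine similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

query = [0.85, 0.15, 0.05]  # pretend embedding of "what is Docker?"
context = retrieve(query, k=1)
prompt = f"Answer using this context: {context[0]}\n\nQ: what is Docker?"
print(context[0])  # → Docker packages apps into containers.
```

The assembled `prompt` string is what would then be streamed to the LLM; only the retrieval step is shown here.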

Posted 1 day ago

Apply

13.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Optum is seeking an accomplished Senior Java/Cloud Architect to drive the design, development, and deployment of advanced healthcare software solutions. In this strategic role, you will provide technical leadership and architectural vision, collaborating with product owners, architects, and cross-functional teams to deliver scalable, secure, and compliant healthcare applications. Primary Responsibilities Technical Leadership Provide architectural direction across backend, frontend, and cloud domains Communicate architecture vision and strategy across organizational levels to ensure alignment and buy-in Offer technical guidance to improve system performance, reliability, and reusability within project constraints Backend Development Lead the design of robust backend systems supporting critical healthcare functions such as EHR integration, claims processing, and patient management Develop and maintain microservices architectures using Java/J2EE; design APIs (REST/gRPC/GraphQL) for large-scale applications Optimize backend services for high performance and scalability in compliance with industry standards (e.g., HIPAA) Manage data efficiently using relational and NoSQL databases. 
Cloud Infrastructure Architect cloud-native solutions on Azure with emphasis on scalability, high availability, disaster recovery, and security Utilize cloud services such as serverless computing (Azure Functions), containerization (Docker/Kubernetes), and managed databases to maximize system reliability Implement Infrastructure as Code (IaC) via tools like Terraform or CloudFormation System Design & Architecture Partner with technical leaders to define platform architectures - including technology stacks, frameworks, data storage - and develop technical roadmaps aligned with business goals Conduct system-level design reviews ensuring adherence to privacy regulations (e.g., HIPAA), interoperability standards, and best practices Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications Bachelor’s degree or equivalent experience 13+ years in software development/architecture 8+ years designing APIs/Web Services (REST architecture) 5+ years leading application modernization/digital transformation initiatives Hands-on experience in Azure cloud infrastructure & architecture Experience with both relational & NoSQL databases Expertise in Java/J2EE and cloud-native development using modern technology stacks including Angular and Kafka Proven ability to create concise technical presentations/materials Solid knowledge of healthcare domain requirements/compliance Preferred Qualifications Frontend development experience (Angular or JavaScript) Experience developing single-page applications/micro frontends or content management Exposure to AI/ML technologies At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

maharashtra

On-site

As a Java Back-end Developer at NTT DATA, you will play a key role in designing, developing, and maintaining scalable backend services and APIs. You will collaborate closely with frontend engineers, DevOps, and product managers to ensure the delivery of end-to-end features. It will be essential for you to optimize system performance, scalability, and reliability while adhering to best practices. Your responsibility will also include writing clean, maintainable, and well-tested code to contribute to the overall success of the projects. To excel in this role, you should possess a minimum of 8 years of professional experience as a backend developer. Proficiency in at least one backend language and framework, such as Java/Spring, is a must. Additionally, you should have a proven track record in designing and maintaining RESTful APIs or JPAs. A solid understanding of relational and NoSQL databases (e.g., MySQL, Redis), as well as familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes), will be beneficial for this position. Experience with cloud services, such as AWS, GCP, or Azure, along with knowledge of CI/CD pipelines and DevOps practices, is highly desirable. You should demonstrate strong debugging, optimization, and problem-solving skills to tackle complex issues efficiently. Moreover, good communication skills in English and the ability to work independently in a remote and distributed team setup are essential to succeed in this role. At NTT DATA, we value diversity and inclusion in our workplace and provide equal opportunities for all. Join us in our mission to push the boundaries of technical excellence and make a difference for our clients and society. Your career growth and development are important to us, and we encourage you to embrace new opportunities and challenges that come your way. 
Be part of our global team where you can continue to grow, belong, and thrive, while impacting short- to medium-term goals through your personal effort and influence over team members. This is an Equal Opportunity Employer offering a hybrid working environment to support a healthy work-life balance. If you are a seasoned professional with complete knowledge and understanding of backend development, ready to take on diverse challenges and contribute to innovative solutions, we welcome you to apply for this exciting opportunity based in Bangalore with 8-12 years of experience.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a skilled Backend Engineer (Scala, K8s & CI/CD) with 4-8 years of experience, you will be responsible for building and maintaining scalable microservices using Scala, Akka, and/or Lagom. Your expertise in designing and implementing containerized applications with Docker and managing them using Kubernetes (K8s) will be essential. Handling real-time messaging systems with Apache Pulsar and integrating backend services with databases using the Slick connector and PostgreSQL are key aspects of this role. Additionally, you will implement advanced search and analytics using ElasticSearch and work on GitLab CI/CD pipelines to support deployment workflows. Collaboration with cross-functional teams to write clean, maintainable, and well-documented code is crucial for success in this position. Your 4-8 years of experience in Scala development, proficiency in the Akka or Lagom frameworks, and strong understanding of microservice architectures will be valuable assets in this role. Experience with containerization and orchestration using Docker and Kubernetes, along with hands-on experience in Apache Pulsar, PostgreSQL, and ElasticSearch, will be essential for the successful execution of your responsibilities. Proficiency in GitLab, CI/CD pipelines, and deployment automation, as well as a solid understanding of software engineering best practices, will contribute to the overall success of the projects you work on. It would be nice to have experience with other messaging systems like Kafka or RabbitMQ, familiarity with monitoring and logging tools such as Prometheus, Grafana, and the ELK stack, exposure to frontend technologies like React and Angular for occasional integration, and an understanding of cloud platforms like AWS, GCP, or Azure. A background in financial, logistics, or real-time data processing domains would also be beneficial. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is required for this position.
In return, we offer a competitive salary and performance-based bonuses, an opportunity to work on complex, real-world distributed systems, a collaborative work culture that values innovation and problem-solving, a modern tech stack and infrastructure with freedom to innovate, learning and development support including courses, certifications, and mentorship, an onsite cafeteria, ergonomic workstations, and team-building events, as well as long-term stability and career growth within a product-focused environment.

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Skills: AI/ML, Machine Learning, TensorFlow, CI/CD, AWS, DevOps, Azure

Job Description
Minimum 5-8 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning in Computer Vision (CV), and generative AI techniques. Proficiency in programming languages such as Python and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, like fairness, transparency, and accountability in AI models and systems. Strong collaboration with engineering teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest developments.

Mandatory Skills
Generative AI techniques; NLP techniques; BERT, GPT, or Transformer models. Azure OpenAI GPT models, Hugging Face Transformers, prompt engineering. Python, knowledge of frameworks like TensorFlow or PyTorch, LangChain, R. Deploying AI solutions in Azure, AWS, or GCP. Deep learning in Computer Vision (CV), Large Language Models (LLMs).

Good To Have Skills
Knowledge of DevOps and MLOps practices. Implement CI/CD pipelines. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
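The prompt engineering listed above often comes down to templating instructions and inputs consistently. A minimal illustrative sketch follows; the template text, role string, and task are invented for the example, not taken from any specific product:

```python
from string import Template

# Hypothetical prompt template for a sentiment task; in practice this string
# would be tuned and versioned alongside the model configuration.
PROMPT = Template(
    "You are a $role.\n"
    "Classify the sentiment of the text as positive or negative.\n"
    "Text: $text\n"
    "Answer:"
)

def build_prompt(text: str, role: str = "careful annotator") -> str:
    """Fill the template; real pipelines would also truncate/sanitize inputs."""
    return PROMPT.substitute(role=role, text=text.strip())

p = build_prompt("  The deployment went smoothly.  ")
print(p.splitlines()[2])  # → Text: The deployment went smoothly.
```

Keeping the template separate from the filling logic is what makes prompt variants easy to A/B test and version-control.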

Posted 1 day ago

Apply

5.0 - 10.0 years

0 Lacs

haryana

On-site

As a Senior Full Stack Software Developer, you will be responsible for designing, developing, and maintaining both front-end and back-end systems to create scalable, secure, and high-performance web applications. Your role will involve leading technical projects, mentoring junior developers, and ensuring the implementation of best practices throughout the development lifecycle. Your impact in this role will include building front-end systems using technologies such as React, Angular, or Vue.js, as well as developing back-end systems with Node.js, Python, or Java. You will also be tasked with designing and optimizing databases (SQL, NoSQL) and APIs (REST, GraphQL), implementing cloud solutions (AWS, Azure) and DevOps tools (Docker, Kubernetes), writing clean and maintainable code, and conducting various types of testing (unit, integration, CI/CD). Collaboration with teams and providing technical leadership will also be key aspects of your responsibilities. In terms of qualifications, you must possess a Bachelor's or Master's degree in computer science or a related field, along with at least 5-10 years of professional experience in software development. You should have a strong DevOps mindset and hands-on experience with Docker, VMs, container orchestration, cloud platforms (AWS, Azure, GCP), CI/CD pipelines, Git-based workflows, and Infrastructure as Code (e.g., Terraform, Pulumi). Solid networking fundamentals and experience in API design, data modeling, and authentication mechanisms are also required. Additionally, you should be comfortable with backend development in at least one modern language (Go, Rust, C#) and possess strong frontend development skills using frameworks like React, Angular, Vue, or Web Components. An understanding of design systems, CSS, responsive UI, and the ability to quickly learn new languages and tools independently are essential. Experience working in cross-functional teams and agile environments is highly valued. 
Desirable qualifications include contributions to or experience with open-source projects, a cross-disciplinary understanding of UX/UI design principles, and familiarity with testing frameworks, quality assurance practices, and monitoring and observability tools. Experience with hybrid or distributed architectures, exposure to WebAssembly, micro frontends, and edge computing, and a background in security best practices for web and cloud applications would be advantageous for this role.

Posted 1 day ago

Apply

7.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space with over 16,700 stores in 31 countries, serving more than 9 million customers each day. At Circle K, we are building a best-in-class global data engineering practice to support intelligent business decision-making and drive value across our retail ecosystem. As we scale our engineering capabilities, we’re seeking a Lead Data Engineer to serve as both a technical leader and people coach for our India-based Data Enablement pod. This role will oversee the design, delivery, and maintenance of critical cross-functional datasets and reusable data assets while also managing a group of talented engineers in India. This position plays a dual role: contributing hands-on to engineering execution while mentoring and developing engineers in their technical careers. About The Role The ideal candidate combines deep technical acumen, stakeholder awareness, and a people-first leadership mindset. You’ll collaborate with global tech leads, managers, platform teams, and business analysts to build trusted, performant data pipelines that serve use cases beyond traditional data domains. Responsibilities Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms Lead the technical execution of non-domain specific initiatives (e.g. 
reusable dimensions, TLOG standardization, enablement pipelines) Architect data models and reusable layers consumed by multiple downstream pods Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks Mentor and coach team members Partner with product and platform leaders to ensure engineering consistency and delivery excellence Act as an L3 escalation point for operational data issues impacting foundational pipelines Own engineering best practices, sprint planning, and quality across the Enablement pod Contribute to platform discussions and architectural decisions across regions Job Requirements Education Bachelor’s or master’s degree in Computer Science, Engineering, or a related field Relevant Experience 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse Knowledge And Preferred Skills Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices. Solid grasp of data governance, metadata tagging, and role-based access control. Proven ability to mentor and grow engineers in a matrixed or global environment. Strong verbal and written communication skills, with the ability to operate cross-functionally. Certifications in Azure, Databricks, or Snowflake are a plus. Strong knowledge of Data Engineering concepts (data pipeline creation, data warehousing, Data Marts/Cubes, data reconciliation and audit, data management). Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control tools, Master Data Management (MDM), and Data Quality tools. Strong experience in ETL/ELT development, QA, and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance). 
Hands-on experience with databases (Azure SQL DB, Snowflake, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting. ADF, Databricks, and Azure certifications are a plus. Technologies we use: Databricks, Azure SQL DW/Synapse, Snowflake, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI
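The auditability framework this role calls for usually boils down to recording row counts at each pipeline stage so a recovered run can be reconciled. A generic pure-Python skeleton illustrating the idea (not ADF or Databricks code; stage names and records are invented):

```python
def run_stage(name, rows, transform, audit):
    """Apply one pipeline stage and record in/out row counts for audit."""
    out = [r for r in (transform(r) for r in rows) if r is not None]
    audit.append({"stage": name, "rows_in": len(rows), "rows_out": len(out)})
    return out

audit_log = []
raw = [{"sku": "A1", "qty": "3"}, {"sku": "", "qty": "2"}, {"sku": "B2", "qty": "5"}]

# Stage 1: drop records with a missing business key.
clean = run_stage("clean", raw, lambda r: r if r["sku"] else None, audit_log)
# Stage 2: cast quantities to integers.
typed = run_stage("cast", clean, lambda r: {**r, "qty": int(r["qty"])}, audit_log)

for entry in audit_log:
    print(entry)
# Reconciliation check: only the cleaning stage is allowed to drop rows.
assert audit_log[1]["rows_in"] == audit_log[1]["rows_out"]
```

In a real platform the audit entries would land in a control table keyed by run ID, which is what makes pipeline recovery and RCA of production issues tractable.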

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space with over 17,000 stores in 31 countries, serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams to discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long term success. About The Role We are looking for a Senior Data Engineer with a collaborative, “can-do” attitude who is committed & strives with determination and motivation to make their team successful. A Sr. Data Engineer who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot and support ETL pipelines and the cloud infrastructure involved in the process, and will be able to support the visualizations team. Roles and Responsibilities Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate deep technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data lakes among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source. 
Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Efficient in ETL/ELT development using Azure cloud services and Snowflake, Testing and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high quality and efficient solutions. Build cross-platform data strategy to aggregate multiple sources and process development datasets. Proactive in stakeholder communication, mentor/guide junior resources by doing regular KT/reverse KT and help them in identifying production bugs/issues if needed and provide resolution recommendation. Job Requirements Bachelor’s Degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred. 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment. 5+ years of experience with setting up and operating data pipelines using Python or SQL 5+ years of advanced SQL Programming: PL/SQL, T-SQL 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads. 
5+ years of strong and extensive hands-on experience in Azure, preferably data-heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data. 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions. 5+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts. Understanding of REST and good API design. Experience working with Apache Iceberg, Delta tables and distributed computing frameworks. Strong collaboration and teamwork skills & excellent written and verbal communication skills. Self-starter and motivated with ability to work in a fast-paced development environment. Agile experience highly desirable. Proficiency in the development environment, including IDE, database server, Git, Continuous Integration, unit-testing tools, and defect management tools. Knowledge Strong knowledge of Data Engineering concepts (data pipeline creation, data warehousing, Data Marts/Cubes, data reconciliation and audit, data management). Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques. Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks. Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control tools, Master Data Management (MDM) and Data Quality tools. Strong experience in ETL/ELT development, QA and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance). Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), Python/Unix shell scripting. 
ADF, Databricks and Azure certification is a plus. Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You are a highly experienced Senior Python & AI Engineer who will be responsible for leading the development of cutting-edge AI/ML solutions. Your role will involve architecting solutions, driving technical strategy, mentoring team members, and ensuring timely delivery of key projects. As a Technical Leader, you will architect, design, and implement scalable AI/ML systems and backend services using Python. You will also oversee the design and development of machine learning pipelines, APIs, and model deployment workflows. Your responsibilities will include reviewing code, establishing best practices, and driving technical quality across the team. In terms of Team Management, you will lead a team of data scientists, ML engineers, and Python developers. Providing mentorship, coaching, and performance evaluations will be vital. You will facilitate sprint planning, daily stand-ups, and retrospectives using Agile/Scrum practices. Additionally, coordinating with cross-functional teams such as product, QA, DevOps, and UI/UX will be necessary to deliver features on time. Your focus on AI/ML Development will involve developing and fine-tuning models for NLP, computer vision, or structured data analysis based on project requirements. Optimizing model performance and inference using frameworks like PyTorch, TensorFlow, or Hugging Face will be part of your responsibilities. Implementing model monitoring, drift detection, and retraining strategies will also be crucial. Project & Stakeholder Management will require you to work closely with product managers to translate business requirements into technical deliverables. You will own the end-to-end delivery of features, ensuring they meet performance and reliability goals. Providing timely updates to leadership and managing client communication if necessary will also be part of your role. 
Your Required Skills & Experience include a minimum of 5 years of professional experience with Python and at least 2 years working on AI/ML projects. You should have a strong understanding of ML/DL concepts, algorithms, and data preprocessing. Experience with frameworks like PyTorch, TensorFlow, and scikit-learn, web frameworks such as FastAPI/Django/Flask, deployment tools like Docker, Kubernetes, and MLflow, and cloud platforms such as AWS/GCP/Azure is essential. In terms of leadership, you should have at least 3 years of experience leading engineering or AI teams, along with excellent planning, estimation, and people management skills. Strong communication and collaboration skills are also required. Preferred Qualifications for this role include a Master's or PhD in Computer Science, Data Science, AI/ML, or related fields, exposure to MLOps practices, experience with RAG, LLMs, transformers, or vector databases, and prior experience in fast-paced startup or product environments. In return, you will have the opportunity to lead cutting-edge AI initiatives, encounter task variety and challenging opportunities, benefit from high autonomy, a flat hierarchy, and fast decision-making, and receive competitive compensation and performance-based incentives.
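The model monitoring and drift detection mentioned in this role can be illustrated with a simple statistical check: compare the mean of live feature values against the training baseline and flag a shift beyond a threshold. This is a deliberately simplified toy heuristic with invented numbers; production systems typically use tests such as PSI or Kolmogorov-Smirnov:

```python
from statistics import mean, stdev

def detect_drift(baseline, live, z_threshold=3.0):
    """Flag drift when the live mean sits more than z_threshold baseline
    standard errors away from the baseline mean (a toy heuristic)."""
    mu, sigma = mean(baseline), stdev(baseline)
    standard_error = sigma / (len(live) ** 0.5)
    z = abs(mean(live) - mu) / standard_error
    return z > z_threshold, round(z, 2)

# Hypothetical baseline feature distribution captured at training time.
train_latency = [100, 102, 98, 101, 99, 100, 103, 97]
# Live traffic whose distribution has shifted upward.
prod_latency = [120, 119, 121, 122, 118]

drifted, z_score = detect_drift(train_latency, prod_latency)
print(drifted)  # → True
```

When the check fires, a retraining strategy like the one described above would kick in (alert, investigate, and retrain on fresher data).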

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About The Role: We are looking for a Data Engineer with a collaborative, "can-do" attitude who is committed to making the team successful and who has experience implementing technical solutions as part of a broader data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse, and will help drive Circle K's next phase in the digital journey by transforming data into actionable business outcomes.
Roles and Responsibilities
- Collaborate with business stakeholders and other technical team members to acquire and migrate the data sources most relevant to business needs and goals
- Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options
- Determine the solutions best suited to develop a pipeline for a particular data source
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
- Deliver efficient ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
- Provide clear documentation for delivered solutions and processes, sharing documentation with the appropriate corporate stakeholders
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
- Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions
- Build cross-platform data strategy to aggregate multiple sources and process development datasets
- Communicate proactively with stakeholders; mentor and guide junior resources through regular KT/reverse KT, and help them identify production bugs/issues and recommend resolutions

Job Requirements
- Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred
- 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment
- 3+ years of experience setting up and operating data pipelines using Python or SQL
- 3+ years of advanced SQL programming: PL/SQL, T-SQL
- 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
- 3+ years of strong, extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and big data
- 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions
- 3+ years of experience defining and enabling data quality standards for auditing and monitoring
- Strong analytical abilities and strong intellectual curiosity
- In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts
- Understanding of REST and good API design
- Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks
- Strong collaboration and teamwork skills; excellent written and verbal communication skills
- Self-starter, motivated, with the ability to work in a fast-paced development environment
- Agile experience highly desirable
- Proficiency with the development environment, including IDE, database server, Git, Continuous Integration, unit-testing tools, and defect management tools

Preferred Skills
- Strong knowledge of Data Engineering concepts (data pipeline creation, data warehousing, Data Marts/Cubes, data reconciliation and audit, data management)
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
- Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM), and Data Quality tools
- Strong experience in ETL/ELT development, QA, and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
- ADF, Databricks, and Azure certification is a plus

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
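The extract-transform-load pattern this role centers on can be sketched in a framework-free way. The real pipelines would run on ADF/Databricks/Snowflake; here SQLite stands in for the warehouse and all table, column, and function names are hypothetical, chosen only to show the three stages and the reject-bad-records step.

```python
import csv
import io
import sqlite3

def extract(raw_csv):
    """Extract: parse rows from a source-system export (here, CSV text)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: coerce types, drop unparseable records, trim fields."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # a real pipeline would route this to a reject/audit table
        out.append({"store_id": r["store_id"].strip(), "amount": round(amount, 2)})
    return out

def load(rows, conn):
    """Load: write cleaned rows into the warehouse and return the row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (store_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:store_id, :amount)", rows)
    return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]

raw = "store_id,amount\n S1 ,10.5\nS2,not_a_number\nS3,7\n"
conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(raw)), conn)
print(loaded)  # 2 -- the unparseable S2 row was rejected
```

The same extract/transform/load separation is what makes RCA and monitoring tractable: each stage can be tested and retried independently.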

Posted 1 day ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with a footprint across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Data Scientist will be a key player on this team, helping grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

About The Role: The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business requirements, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.

Responsibilities

Analytics & Strategy
- Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business
- Utilize data mining, statistical, and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data
- Apply multiple algorithms or architectures and recommend the best model, with an in-depth description, to evangelize data-driven business decisions
- Utilize the cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time-series/cross-sectional data

Operational Excellence
- Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project
- Structure hypotheses, build thoughtful analyses, develop underlying data models, and bring clarity to previously undefined problems
- Partner with Data Engineering to build, design, and maintain core data infrastructure, pipelines, and data workflows to automate dashboards and analyses

Stakeholder Engagement
- Work collaboratively across multiple sets of stakeholders (business functions, Data Engineers, Data Visualization experts) to deliver on project deliverables
- Articulate complex data science models to business teams and present insights in easily understandable and innovative formats

Job Requirements

Education
- Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
- Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)

Relevant Experience
- 3-4 years of relevant working experience in a data science/advanced analytics role

Behavioural Skills
- Delivery excellence
- Business disposition
- Social intelligence
- Innovation and agility

Knowledge
- Functional analytics (supply chain analytics, marketing analytics, customer analytics)
- Statistical modelling using analytical tools (R, Python, KNIME, etc.) and big data technologies
- Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference)
- Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference
- Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.)
- Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.)
- Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
- Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and Data Engineering tools
- Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.)
- Microsoft Office applications (MS Excel, etc.)
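The A/B testing and hypothesis-testing knowledge this listing asks for can be sketched with a two-proportion z-test in plain Python. The conversion counts and the 1.96 critical value are illustrative only; a real experiment analysis would also plan sample size, check test assumptions, and typically use a statistics library rather than hand-rolled formulas.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for H0: variants A and B share one conversion rate.
    Uses the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A: 120/1000 conversions; Variant B: 100/1000 conversions
z = two_proportion_z(120, 1000, 100, 1000)
significant = abs(z) > 1.96  # two-sided test at the 5% level
print(round(z, 2), significant)  # 1.43 False -- not significant at 5%
```

The design choice here (pooled standard error under the null) is the standard textbook form; an unpooled version would be used for confidence intervals on the difference.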

Posted 1 day ago

Apply