8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a highly skilled and experienced Lead Software Engineer to spearhead the design, development, and maintenance of cutting-edge backend systems and microservices. The ideal candidate will excel in Java development, possess deep familiarity with cloud technologies (AWS/Azure), and have a proven track record of working in agile, collaborative environments.

Responsibilities
- Develop, enhance, and maintain code
- Build backend microservices and REST APIs
- Conduct comprehensive unit testing
- Review code for consistency and quality
- Follow best practices, including code review, unit testing, CI, and other development standards
- Actively participate in SCRUM ceremonies and collaborate across teams
- Estimate development efforts and assist in project planning
- Mentor junior developers and share expertise with peers

Requirements
- 8+ years of total development work experience
- Hands-on development experience with Java and the Spring Framework
- Knowledge of APIs and microservices architecture
- Proficiency in cloud technologies like AWS or Azure
- Competency in development practices, such as CI/CD pipelines and testing frameworks
- Familiarity with agile methodologies and SCRUM practices
- Ability to collaborate and communicate effectively in cross-functional teams

Nice to have
- Experience with Apache Kafka
- Familiarity with containerization technologies such as Docker and Kubernetes
- Background in financial services, particularly wealth management
Posted 5 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
💡 Are you passionate about data architecture and solving real-world problems with modern Big Data technologies?

About The Role
We are seeking a highly skilled Resident Solution Architect to join our team. This professional will work closely with clients to understand their business needs, architect solutions, and implement modern Big Data platform technologies.

🔧 Key Responsibilities:
- Collaborate with customers to understand their business goals, data architecture, and technical requirements
- Design end-to-end solutions that leverage Starburst products to address customer needs, including data access, analytics, and performance optimization
- Develop architectural diagrams, technical specifications, and implementation plans for customer projects
- Lead the implementation and deployment of Starburst solutions, working closely with customer teams and internal stakeholders
- Provide technical guidance and best practices to customers on using Starburst products effectively
- Work with partners to train and upskill external personnel on Starburst products
- Collaborate with internal teams and external partners to create resources, best practices, and delivery processes
- Troubleshoot and resolve technical issues during implementation and operation
- Stay current on industry trends, emerging technologies, and data management best practices

📚 Required Experience & Knowledge:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Professional experience in technical roles such as solution architect, data engineer, or software engineer
- Strong understanding of data architecture principles, including data modeling, data integration, and data warehousing
- Proficiency in SQL and experience with distributed query engines (e.g., Presto, Trino, Apache Spark)
- Strong problem-solving skills and strategic thinking for technical and business challenges
- Excellent communication and interpersonal skills
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and containerization (e.g., Docker, Kubernetes) is a plus
- Experience with open-source technologies and/or contributions to the open-source community is a plus
- Experience delivering technical training, internally or externally, is a plus
- Fluent English is required for daily communication with international clients and teams

📌 Other Information:
- This is a client-facing, hands-on role with high impact on customer success
- Remote work opportunity
- Clients are typically based in the USA, and work must be delivered within the customer's timezone

💼 Why Join Us?
- Work on impactful, client-facing projects with cutting-edge data technologies
- Collaborate with a global, experienced team of professionals
- Flexible remote work setup and opportunity to grow within a fast-paced environment

🚀 Ready to join us? Apply now and bring your data expertise to the next level!
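The distributed-query-engine requirement above (Presto, Trino, Starburst) boils down to running one SQL statement across separate data sources. As a toy stand-in, not Starburst itself, two attached SQLite databases can illustrate the federated-join idea; every table name and row below is invented:

```python
import sqlite3

# Toy stand-in for a federated query engine: one connection, two separate
# in-memory databases, one SQL statement joining across them.
conn = sqlite3.connect(":memory:")                  # "warehouse" source
conn.execute("ATTACH DATABASE ':memory:' AS crm")   # second, independent source

conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 10, 99.5), (2, 11, 15.0), (3, 10, 42.0)])
conn.execute("CREATE TABLE crm.customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO crm.customers VALUES (?, ?)",
                 [(10, "Acme"), (11, "Globex")])

def spend_by_customer(conn):
    """Join the two sources in a single statement and aggregate."""
    return conn.execute("""
        SELECT c.name, SUM(o.amount) AS total
        FROM orders o JOIN crm.customers c ON c.id = o.customer_id
        GROUP BY c.name ORDER BY total DESC
    """).fetchall()

print(spend_by_customer(conn))  # [('Acme', 141.5), ('Globex', 15.0)]
```

In a real Starburst/Trino deployment the ATTACH would be replaced by catalog connectors, but the shape of the cross-source query is the same.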
Posted 5 days ago
3.0 years
0 Lacs
Mohali district, India
On-site
Software Developer - Full Stack (React/Python)
nCare, Inc, California, US
www.ncaremd.com

Position Summary
We are seeking an experienced Software Developer with 3+ years of hands-on development experience to join our dynamic engineering team. The ideal candidate will be proficient in modern frontend technologies (React, NextJS), backend development (Python), databases (PostgreSQL), and cloud platforms (Google Cloud), while demonstrating expertise in AI-powered development tools and workflows. Open to working on a project/contract basis too.

Key Responsibilities

Development & Engineering
- Design, develop, and maintain scalable web applications using React and NextJS
- Build robust backend services and APIs using Python (FastAPI framework)
- Design and optimize PostgreSQL databases, write efficient queries, and manage database migrations
- Implement responsive, user-friendly interfaces with modern JavaScript, HTML5, and CSS3
- Develop and optimize database interactions and data pipelines
- Ensure code quality through comprehensive testing, code reviews, and debugging

Cloud & Infrastructure
- Deploy and manage applications on Google Cloud Platform (GCP)
- Utilize Google Cloud services including Cloud Run, Cloud Storage, Cloud SQL, Vertex AI, and others
- Implement CI/CD pipelines and DevOps best practices
- Monitor application performance and optimize for scalability

AI-Enhanced Development
- Leverage AI development tools (GitHub Copilot, Gemini Code Assist, or similar) to accelerate development cycles
- Integrate AI/ML capabilities into applications using Google Cloud AI services
- Stay current with emerging AI tools and incorporate them into development workflows
- Contribute to improving team productivity through AI-assisted coding practices

Collaboration & Communication
- Work closely with cross-functional teams including designers, product managers, and other developers
- Participate in code reviews and provide constructive feedback to team members
- Document technical solutions and maintain clear project documentation
- Communicate technical concepts effectively to both technical and non-technical stakeholders

Required Qualifications

Technical Skills
- 3+ years of professional software development experience
- Frontend Development: proficiency in React.js and NextJS; strong knowledge of JavaScript (ES6+), HTML5, CSS3; experience with state management (Redux, Context API); familiarity with modern build tools (Webpack, Vite) and package managers (npm, yarn)
- Backend Development: strong Python programming skills; experience with web frameworks (Django, Flask, or FastAPI); knowledge of RESTful API design and implementation; proficiency with PostgreSQL database design, optimization, and management; experience with SQL queries, database migrations, and ORM frameworks; additional experience with NoSQL databases is a plus
- Google Cloud Platform: hands-on experience with GCP services and deployment; understanding of cloud architecture patterns; experience with containerization (Docker) and orchestration
- AI Development Tools: demonstrated experience using AI-powered development tools (Copilot, ChatGPT, Claude, Gemini, etc.); ability to effectively prompt and collaborate with AI assistants; experience optimizing development workflows with AI tools

Core Competencies
- Problem-Solving: strong analytical and critical thinking skills with the ability to debug complex issues
- Quick Learner: demonstrated ability to rapidly adapt to new technologies and frameworks
- Communication: excellent verbal and written communication skills
- Team Collaboration: experience working in agile development environments

Additional Requirements
- Experience with version control systems (Git) and collaborative development workflows
- Knowledge of building AI agents
- Understanding of software testing principles and frameworks (Jest, pytest)
- Knowledge of web performance optimization and security best practices
- Familiarity with responsive design and cross-browser compatibility

Preferred Qualifications
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience
- Experience with TypeScript
- Knowledge of GraphQL and modern API technologies
- Familiarity with machine learning concepts and implementation
- Previous experience in agile/scrum development methodologies
- Contributions to open-source projects or an active GitHub profile
- Experience with monitoring and logging tools
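The backend requirements above center on RESTful API design. Here is a framework-free sketch of the status-code-plus-JSON-body contract such an endpoint returns; the resource, names, and data are invented, and in this stack the same function would be a FastAPI path operation:

```python
import json

# Hypothetical resource and data; shows the status-code + JSON-body contract.
USERS = {1: {"id": 1, "name": "Asha"}, 2: {"id": 2, "name": "Ravi"}}

def get_user(user_id: int):
    """GET /users/{user_id} -> (status_code, JSON body)."""
    user = USERS.get(user_id)
    if user is None:
        return 404, json.dumps({"detail": "User not found"})
    return 200, json.dumps(user)

status, body = get_user(1)
print(status, body)  # 200 {"id": 1, "name": "Asha"}
```

A missing resource returns 404 with an error body rather than raising, which keeps the HTTP semantics explicit.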
Posted 5 days ago
0 years
0 Lacs
Bhubaneswar, Odisha, India
Remote
📍 Location: Bhubaneswar / Remote
📄 Internship Certificate: Provided upon successful completion
💰 Stipend: Up to ₹3,000/month (performance-based)
⏳ Duration: 2–3 months (can be extended based on performance)

🎯 Who Can Apply?
We are looking for passionate tech and stock market enthusiasts who want to gain hands-on experience in backend and full-stack development, especially in real-world financial and AI-based applications. If you are someone who follows the stock market closely, understands how it works, and is eager to apply your technical skills to real financial datasets, we want you on our team.

✅ Eligibility
- Freshers or students pursuing: B.Tech / B.E / M.Tech, BCA / MCA, B.Sc (IT), or equivalent
- Strong interest in stock markets and financial data
- Solid enthusiasm for web backend/hybrid development
- Basic understanding of Python, SQL, and REST APIs
- Willingness to learn and work with live stock market data

🔧 Key Responsibilities
- Backend Development: work with Python, Pandas, NumPy, PostgreSQL/SQL, and Node.js
- Stock Market Data Management: clean, analyze, and manage datasets from Indian stock markets (NSE/BSE)
- API Development: build and integrate REST APIs for frontend consumption
- Frontend Collaboration: coordinate with React developers; work with HTML, CSS, and JS
- Cloud Deployment: assist in deploying backend services on the cloud (AWS/GCP/Oracle Free Tier)
- AI/ML Integration: support AI-driven features for financial forecasting and analytics

📚 Learning Opportunities
- Real-world exposure to financial APIs, trading systems, and live market data
- Work closely with full-stack and AI developers on fintech applications
- Get started with containerization (Docker) and automation (GitHub Actions)
- Deploy and maintain apps in cloud environments
- Contribute to AI-powered tools in finance and trading

💡 Application Instructions
📌 Share your GitHub profile (with relevant code or projects).
📌 If you don’t have one, complete this optional sample task: "Build a webpage listing companies on the left panel. When a company is clicked, show relevant stock price charts. Use any sample dataset or mock stock data." This helps us evaluate your practical understanding, stock market awareness, and coding ability.

🎁 Perks & Benefits
- Internship Completion Certificate & Letter of Recommendation
- Real-world exposure to stock market tech and AI
- Learn industry-standard tools, version control, and deployment practices
- Flexible work hours (100% remote)
- Opportunity to extend the internship or join long-term based on performance

🚀 Ready to merge your passion for the stock market with backend development? Apply now and start building applications that matter in the world of fintech!
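For applicants eyeing the sample task, the data-management responsibility above ("clean, analyze, and manage datasets") can start as small as a moving average over mock closing prices, pure stdlib, no live market data required; the prices here are invented:

```python
# Simple moving average over mock closing prices (invented data).
def simple_moving_average(prices, window):
    """Return the SMA series; entries before a full window are None."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)  # not enough history yet
        else:
            out.append(round(sum(prices[i + 1 - window:i + 1]) / window, 2))
    return out

closes = [100.0, 102.0, 101.0, 105.0, 107.0]
print(simple_moving_average(closes, 3))  # [None, None, 101.0, 102.67, 104.33]
```

The same series, fed to a charting library, would produce exactly the kind of stock price chart the sample task asks for.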
Posted 5 days ago
1.0 years
0 Lacs
Surat, Gujarat, India
On-site
We're seeking an experienced PHP Laravel developer to join our team. As a PHP Laravel developer, you will be responsible for designing, developing, and maintaining web applications using the Laravel framework. You will work closely with our cross-functional teams to identify and prioritize project requirements, write clean and efficient code, and implement robust security measures to protect against vulnerabilities.

Responsibilities
- Design, develop, and maintain web applications using PHP and the Laravel framework
- Write clean, efficient, and well-documented code that meets industry standards
- Collaborate with cross-functional teams to identify and prioritize project requirements
- Implement robust security measures to protect against vulnerabilities and ensure data integrity
- Optimize application performance and troubleshoot issues using debugging tools and techniques
- Stay up to date with industry trends and emerging technologies, and apply this knowledge to improve our applications
- Participate in code reviews and contribute to the improvement of the codebase
- Collaborate with QA engineers to ensure that applications meet quality and functionality standards
- Troubleshoot and resolve technical issues, and provide support to other teams as needed

Requirements
- 1+ years of experience in PHP development, with a strong focus on the Laravel framework
- Strong understanding of PHP, Laravel, and related technologies (e.g., MySQL, HTML, CSS, JavaScript)
- Experience with front-end frameworks like Vue.js or React is a plus
- Strong understanding of object-oriented programming principles and design patterns
- Experience with version control systems like Git
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration skills
- Ability to work in a fast-paced environment and meet deadlines
- Experience with integrating third-party APIs (e.g., payment gateways, social media) and understanding of API design principles

Nice to Have
- Experience with cloud platforms like AWS or Azure
- Knowledge of containerization using Docker
- Experience with DevOps practices and tools like Jenkins or Travis CI
- Familiarity with agile development methodologies

Benefits
- Competitive salary and benefits package
- Opportunity to work on challenging projects and contribute to the growth of our company
- Collaborative and dynamic work environment
- Professional development opportunities
Posted 5 days ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description:
We are seeking a highly skilled Java Application Specialist with over 10 years of professional experience in core Java, C++, Tomcat, Oracle/MySQL, COTS products (Neustar: TransUnion), and microservices development. The ideal candidate will have a strong background in developing and supporting web applications, a deep understanding of the Software Development Lifecycle (SDLC), and experience in agile methodologies.

Roles and Responsibilities:
- Participate in all phases of the development life cycle, including design, coding, testing, production release, and support.
- Work in an agile team environment to deliver high-quality code.
- Drive innovation through rapid prototyping and iterative development.
- Troubleshoot and fix bugs, performance issues, and display issues.
- Collaborate effectively in an open, highly collaborative team environment.
- Architect, design, and develop cross-functional, multi-platform application systems.
- Engage with Specialists, Engineers, Architects, Product Managers, and Business stakeholders to identify technical and functional requirements.
- Author and review high-quality code with a strong emphasis on automated testing and validation.
- Communicate clearly and document solutions to ensure reproducibility.

Must-Have Skills:
- 10+ years of practical experience in Java/JEE programming.
- Proficiency in Java 8 or above and microservices development.
- Experience working with COTS products (Neustar: TransUnion).
- Extensive experience with Web Services (REST/SOAP).
- Strong hands-on experience in Core Java/J2EE, Spring MVC, and Spring Boot.
- Experience with Object-Oriented Design, Design Patterns, and test-driven development.
- Proficiency in RDBMS (Oracle, MySQL).
- Experience in Apache/Perl development.
- Experience with build tools such as Maven/Gradle.
- Proficiency in distributed version control tools (Git/GitHub/Bitbucket).
- Practical experience with CI/CD pipelines, particularly with Jenkins.
- Experience in agile software development environments.
- Strong unit testing/Mockito experience.
- Excellent communication skills with a passion for documentation.

Good-to-Have Skills:
- Experience with popular application servers such as Tomcat, WebLogic, JBoss, and GlassFish.
- Experience with cloud platforms, particularly Azure, and containerization using Docker.
- Familiarity with UNIX (Linux) environments.
- Basic knowledge of front-end technologies such as Angular, React, or NodeJS.
- Knowledge of distributed systems and performance tuning.
- Java certifications and Microsoft Certified: Azure Developer are a plus.
- Experience with process management software such as JIRA.

Qualifications:
- Bachelor's or master's degree in computer science or a related field.

#SoftwareEngineering

Weekly Hours: 40
Time Type: Regular
Location: Chennai, Tamil Nadu, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Posted 5 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Good experience in building data pipelines using ADF
- Good experience with programming languages such as Python and PySpark
- Solid proficiency in SQL and complex queries
- Demonstrated ability to learn and adapt to new data technologies
- Proven good skills in Azure data processing tools such as Azure Data Factory and Azure Databricks
- Proven good problem-solving skills
- Proven good communication skills
- Proven technical skills: Python, Azure data processing tools

Other Requirements
- Collaborate with the team, architects, and product stakeholders to understand the scope and design of a deliverable
- Participate in product support activities as needed by the team
- Understand product architecture and the features being built, and come up with product improvement ideas and POCs
- Individual contributor for Data Engineering: data pipelines, data modelling, and data warehouse

Preferred Qualifications
- Knowledge or experience with containerization: Docker, Kubernetes
- Knowledge or experience with the Big Data/Hadoop ecosystem: Spark, Hive, HBase, Sqoop, etc.
- Experience with APIs and integrating external data sources
- Experience in build or deployment automation: Jenkins
- Knowledge or experience using Microsoft Visio and PowerPoint
- Knowledge of Agile or Scrum

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
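As a hedged illustration of the "SQL and complex queries" qualification above, here is a window-function query of the kind such pipelines use, run on SQLite with an invented claims table (in the role itself this would execute in ADF or Databricks against Azure data):

```python
import sqlite3

# Invented claims table; the pattern shown is window-function ranking.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (member_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO claims VALUES (?, ?)",
                 [(1, 250.0), (1, 900.0), (2, 120.0), (2, 480.0), (2, 60.0)])

# Largest claim per member: rank within each member, keep rank 1.
top_claims = conn.execute("""
    SELECT member_id, amount FROM (
        SELECT member_id, amount,
               ROW_NUMBER() OVER (PARTITION BY member_id
                                  ORDER BY amount DESC) AS rn
        FROM claims
    ) ranked
    WHERE rn = 1
    ORDER BY member_id
""").fetchall()
print(top_claims)  # [(1, 900.0), (2, 480.0)]
```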
Posted 5 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Act as a technical lead, guiding the team through complex challenges and ensuring alignment with architectural and business goals
- Conduct functional and technical spikes to evaluate new approaches or technologies and effectively communicate findings to the team
- Operate as an individual contributor, actively participating in the design, development, and deployment of application features
- Provide support and mentorship to team members, helping them resolve blockers and grow their technical skills
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's degree in computer science, Software Engineering, or a related field
- 7+ years of professional experience in software development
- Experience with the full software development lifecycle (SDLC)
- Good hands-on experience in Java, Spring Boot, REST, MongoDB, and DevOps technologies
- Experience with cloud platforms (AWS, Azure, GCP)
- Experience with databases (SQL and NoSQL)
- Solid understanding of data structures, algorithms, and system design
- Proven track record of leading software projects or mentoring junior developers
- Familiarity with DevOps practices and CI/CD pipelines
- Knowledge of containerization tools (Docker, Kubernetes)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

#Gen #NJP
Posted 5 days ago
8.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Calling all innovators – find your future at Fiserv.

We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title
Tech Lead, Data Architecture

What does a successful Snowflake Advisor do?
We are seeking a highly skilled and experienced Snowflake Advisor to take ownership of our data warehousing strategy, implementation, maintenance, and support. In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization.

What You Will Do
- Define and implement best practices for data modelling, schema design, and query optimization in Snowflake
- Develop and manage ETL/ELT workflows to ingest, transform, and load data into Snowflake from various sources
- Integrate data from diverse systems such as databases, APIs, flat files, and cloud storage into Snowflake, using tools like StreamSets, Informatica, or dbt to streamline data transformation processes
- Monitor and tune Snowflake performance, including warehouse sizing, query optimization, and storage management
- Manage Snowflake caching, clustering, and partitioning to improve efficiency
- Analyze and resolve query performance bottlenecks
- Monitor and resolve data quality issues within the warehouse
- Collaborate with data analysts, data engineers, and business users to understand reporting and analytic needs
- Work closely with the DevOps team on automation, deployment, and monitoring
- Plan and execute strategies for scaling Snowflake environments as data volume grows
- Monitor system health and proactively identify and resolve issues
- Implement automation for regular tasks
- Enable seamless integration of Snowflake with BI tools like Power BI and create dashboards
- Support ad hoc query requests while maintaining system performance
- Create and maintain documentation related to data warehouse architecture, data flow, and processes
- Provide technical support, troubleshooting, and guidance to users accessing the data warehouse
- Optimize Snowflake queries and manage performance
- Keep up to date with emerging trends and technologies in data warehousing and data management
- Good working knowledge of the Linux operating system
- Working experience with Git and other repository management solutions
- Good knowledge of monitoring tools like Dynatrace and Splunk
- Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines
- Provide mentorship and guidance to team members in Snowflake implementation, performance tuning, and data management
- Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps
- Act as the point of contact for Snowflake-related queries, issues, and initiatives

What You Will Need To Have
- 8 to 10 years of experience in data management tools like Snowflake, StreamSets, and Informatica
- Experience with monitoring tools like Dynatrace and Splunk
- Experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS
- Ability to track progress against assigned tasks, report status, and proactively identify issues
- Ability to present information effectively in communications with peers and the project management team
- Highly organized; works well in a fast-paced, fluid, and dynamic environment

What Would Be Great To Have
- Experience with EKS for managing Kubernetes clusters
- Containerization technologies such as Docker and Podman
- AWS CLI for command-line interactions
- CI/CD pipelines using Harness
- S3 for storage solutions and IAM for access management
- Banking and financial services experience
- Knowledge of software development life cycle best practices

Thank you for considering employment with Fiserv. Please:
- Apply using your legal name
- Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable)

Our Commitment To Diversity And Inclusion
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 5 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Assoc Data Engineering Mgr

About UHG
UnitedHealth Group is a leading health care company serving more than 85 million people worldwide. The organization is ranked 5th among Fortune 500 companies. UHG serves its customers through two different platforms: UnitedHealthcare (UHC) and Optum. UHC is responsible for providing healthcare coverage and benefits services, while Optum provides information- and technology-enabled health services. India operations of UHG are aligned to Optum. The Optum Global Analytics Team, part of Optum, is involved in developing broad-based and targeted analytics solutions across different verticals for all lines of business.

Qualifications & Requirements
- Bachelor's / 4-year university degree
- Experience: between 7 and 10 years with the skills below

Must-Have Skills
- Good understanding of Spark architecture
- Good understanding of data architecture and solutions in Azure
- Good skills in Azure data processing tools like Azure Data Factory and Azure Databricks
- Good experience in building data pipelines using ADF
- Strong proficiency in SQL and complex queries
- Good experience with programming languages such as Python and PySpark
- Ability to learn and adapt to new data technologies
- Knowledge and experience with containerization: Docker, Kubernetes
- Knowledge/experience with the Big Data and Hadoop ecosystem: Spark, Hive, HBase, Sqoop, etc.
- Build/deployment automation: Jenkins
- Good problem-solving skills
- Good communication skills
- Act in a strategic capacity as a senior technical expert for all current Azure cloud-based solutions while keeping abreast of industry cloud solutions
- Lead projects independently and with minimal guidance, including high-level client communication and project planning

Good to Have
- Technical lead for Data Engineering: data pipelines, data modelling, and data warehouse
- Experience developing cloud-based API gateways
- Experience and exposure to API integration frameworks
- Certified in Azure Data Engineering (AZ-205)
- Excellent time management, communication, decision making, and presentation skills

Position Responsibilities
- Work under the supervision of Data Architects to gather requirements and create data models for Data Science and Business Intelligence projects
- Work closely with Data Architects to create project plans and list the exhaustive set of activities required to implement the solution
- Engage in client communications for all important functions, including data understanding/exploration and strategizing solutions
- Document metadata information about the data sources used in the project and present that information to team members during team meetings
- Design and develop data marts, de-normalized views, and data models for projects
- Design and develop data quality control processes around the data sets used for analysis
- Mentor and groom junior engineers
- Lead and drive knowledge-sharing sessions within the team
- Own the technical deliveries of the team
- Work with senior team members to develop new capabilities for the team
- Be accountable, driven, and passionate about software development
- Look forward to building and applying technical and functional skills
- Focus on understanding goals, priorities, and plans
- Bring a problem-solving approach
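One of the responsibilities above, designing data quality control processes around analysis data sets, can be sketched as a small rule-based row check; the rules and field names here are invented for illustration only:

```python
# Illustrative data-quality control: validate rows before they enter a mart.
# The rules (required member_id, non-negative amount) are example assumptions.
def check_row(row):
    """Return a list of rule violations for one record (empty = clean)."""
    issues = []
    if not row.get("member_id"):
        issues.append("missing member_id")
    if row.get("amount") is None or row["amount"] < 0:
        issues.append("invalid amount")
    return issues

rows = [{"member_id": "M1", "amount": 10.0},
        {"member_id": "", "amount": -5.0}]
report = {}
for i, r in enumerate(rows):
    issues = check_row(r)
    if issues:
        report[i] = issues  # keep only failing rows, keyed by position

print(report)  # {1: ['missing member_id', 'invalid amount']}
```

In an Azure pipeline the same checks would typically run as a Databricks/ADF validation step, with failing rows routed to a quarantine table.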
Posted 5 days ago
4.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
🚀 Coforge Ltd. is hiring an AWS Developer for positions in Hyderabad and Greater Noida. 📍 Job Locations: Hyderabad & Greater Noida 🕒 Experience Required: 4 to 5 Years 📩 Apply Now: Send your CV to Gaurav.2.Kumar@coforge.com 📱 WhatsApp for Queries: 9667427662 ⚡ Immediate Joiners Preferred – Candidates who can join immediately are prioritized for the time-sensitive project. 🔧 Key Responsibilities: - Design and implement cloud infrastructure using AWS CloudFormation or Terraform. - Develop and maintain serverless applications with AWS Lambda. - Write clean, efficient code in Python and Java. - Collaborate with cross-functional teams for cloud-based solutions. - Ensure security, scalability, and performance of cloud applications. - Troubleshoot and optimize existing cloud deployments. ✅ Required Skills & Qualifications: - AWS Certified Cloud Practitioner. - 4–5 years of hands-on AWS development experience. - Proficiency in Python and Java. - Experience with Infrastructure as Code (IaC) using CloudFormation or Terraform. - Strong understanding of AWS Lambda and serverless architecture. - Excellent problem-solving and communication skills. 🌟 Preferred Skills: - Familiarity with CI/CD pipelines and DevOps practices. - Knowledge of other AWS services like S3, EC2, API Gateway, and DynamoDB. - Experience with containerization (Docker, ECS).
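As a sketch of the serverless development described above, a minimal AWS Lambda handler for an API Gateway proxy integration might look like the following. The event shape and field names are assumptions; a production handler would add validation, logging, and error handling:

```python
import json

def lambda_handler(event, context):
    """Return a greeting; API Gateway proxy integration expects
    a dict with statusCode/headers/body keys."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation: no AWS runtime is needed for a unit test,
# which is one reason handlers are kept as plain functions.
response = lambda_handler({"body": json.dumps({"name": "Coforge"})}, None)
```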
Posted 5 days ago
5.0 years
0 Lacs
India
On-site
Job Title: Senior Software Engineer ( Full Stack + .NET + React/Next.js) Location: Offshore Employment Type: Contract to Hire Department: Technology / Engineering Reports To: Director of Application Development About Us We are a forward-thinking, tech-driven organization dedicated to building high-quality, scalable digital solutions. As we continue to grow and modernize our platforms, we are looking for a passionate and skilled Full Stack Software Developer to join our team. If you thrive in a collaborative environment and love working across the Microsoft and JavaScript ecosystems, we’d love to meet you. Role Overview As a Full Stack Software Developer, you’ll be responsible for designing, developing, and maintaining applications across the stack—from backend APIs and services to dynamic, modern front-end interfaces. You will play a key role in building robust web applications using .NET (C#) on the backend and React/Next.js on the frontend. Key Responsibilities Design, develop, test, and deploy scalable web applications using .NET and React/Next.js. Build and maintain RESTful APIs and backend services with ASP.NET Core. Develop modern, responsive front-end user interfaces using React.js and Next.js. Participate in code reviews, architectural discussions, and agile ceremonies. Collaborate with product managers, designers, and other developers to deliver high-impact features. Optimize application performance and ensure application security best practices. Write clean, maintainable, and well-documented code. Contribute to CI/CD pipelines and automated testing. Required Qualifications 5+ years of professional experience in software development. Strong proficiency in C# and .NET/.NET Core. Solid experience with modern front-end technologies: React.js and Next.js. Hands-on experience with SQL Server or other relational databases. Proficiency with HTML5, CSS3, JavaScript/TypeScript, and REST APIs. Experience with version control systems such as Git. 
Strong understanding of software design patterns and best practices. Familiarity with agile development methodologies (Scrum or Kanban). Preferred Qualifications Experience with Microsoft Azure or other cloud platforms. Familiarity with GraphQL, WebSockets, or real-time data handling. Knowledge of unit testing frameworks (xUnit, Jest, etc.). Experience with containerization (Docker, Kubernetes). Exposure to CI/CD tools such as Azure DevOps or GitHub Actions. Familiarity with security and performance optimization best practices.
Posted 5 days ago
15.0 years
0 Lacs
Madurai, Tamil Nadu, India
Remote
At BairesDev®, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley. Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide. When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success. DevOps/SRE Engineer at BairesDev We are seeking a DevOps/SRE Engineer to join our growing engineering team. This role requires solid knowledge in cloud infrastructure, site reliability engineering practices, and automation technologies. You will be responsible for implementing and maintaining robust infrastructure solutions while ensuring system reliability, scalability, and security. This position offers the opportunity to work on cutting-edge technologies while solving infrastructure challenges for enterprise clients. What You'll Do: - Implement DevOps and SRE strategies aligned with business objectives. - Manage cloud infrastructure across AWS and Azure with a focus on reliability, security, and cost optimization. - Create and maintain infrastructure as code using Terraform and implement CI/CD pipelines to improve deployment efficiency. - Implement containerization with Docker and orchestration with Kubernetes for scalable application deployments. - Collaborate with development teams to improve system reliability. What we are looking for: - 3+ years of relevant experience in DevOps/SRE roles. - Knowledge of AWS cloud services and architecture. - Experience implementing SRE/DevOps strategies. - Experience with Azure cloud platform and services. - Background in automation and CI/CD implementation using Terraform, GitHub, and Jenkins. - Systems administration skills across Windows and Linux environments. 
- Experience with containerization using Docker and orchestration with Kubernetes. - Good communication skills and ability to work in a global remote team. - Advanced level of English. How we make your work (and your life) easier: - 100% remote work (from anywhere). - Excellent compensation in USD or your local currency if preferred - Hardware and software setup for you to work from home. - Flexible hours: create your own schedule. - Paid parental leaves, vacations, and national holidays. - Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent. - Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities. Apply now and become part of a global team where your unique talents can truly thrive!
Posted 5 days ago
5.0 - 7.0 years
5 - 9 Lacs
Gurugram
Work from Office
About the Opportunity
Job Type: Application 31 July 2025
Title: Senior Java Engineer
Department: Technology
Location: Gurgaon, India
Reports To: Senior Manager
Level: 6

Fidelity International offers world class investment solutions and retirement expertise. As a privately owned, independent company, investment is our only business. We are driven by the needs of our clients, not by shareholders. Our vision is to deliver innovative client solutions for a better future. Our people are passionate, engaged, smart and curious, and we give them the independence and the confidence to make a difference. While we take pride in the excellence of our investment solutions and client service, we know we can always do better. We are honest, respectful and make tough calls, challenging the status quo to achieve better outcomes through innovation. Above all else, we always put our clients first. Find out more about what we do, our history, and how you could be a part of our future at careers.fidelityinternational.com

About your team
The GPS tech stack and engineering ecosystem need to evolve at a fast pace to cater to strategic technology drivers and to meet end-customer experience demands. There is a need for a seasoned technology leader who can lead and drive this space with dedicated energy, passion and inspiration. As a Senior Java Engineer, you will play a key role in a global program, collaborating with senior business leaders, product owners, and technology teams within Fidelity International to deliver and enhance Fidelity's record-keeping platform. Working alongside the business proposition team and technology architects, you will utilize your expertise in core Java, multithreading, collections, data structures, algorithms, and databases to assist with the engineering aspects, design, definition, exploration, and delivery of an end-to-end solution to scale Fidelity's record-keeping platform.
You must have a passion for delivering high-quality and scalable solutions with a continued focus on customer needs. You should be both willing to challenge and be challenged on where things can be improved, and be comfortable working alongside other engineers in a pair programming environment.

About your role
This position requires a strong self-starter with a solid technical engineering background and influencing skills, who can lead the way and assist development teams with architecture, cloud best practices, troubleshooting, and any other technical issues related to the implementation of a customer-facing proposition.
Responsible for delivering and providing technical expertise as part of the engineering team, from design through day-to-day coding.
Work with product owners to identify new improvements and customer requirements, and follow through to delivery.
Ensure delivery in a timely, efficient, and cost-effective manner.
Manage stakeholders across various technology and business teams.
Ensure that technical solutions are fit for purpose, including functional, non-functional, and support requirements, and aligned with Global Technology Strategies.
Be the trusted advisor to the business.
Partner closely with architecture, business, and supporting central groups while working within a global team.

About you
The ideal candidate will have 10+ years' experience working as a software engineer with:
Expertise in solution design and architecture, with a focus on scalability, performance, and security.
Strong coding and development skills in core Java, multithreading, collections, data structures, and algorithms.
Experience in technical analysis, integrations, and development. Functional understanding of business requirements and the ability to translate them into technical solutions. Strong problem-solving skills and the ability to diagnose and resolve complex issues. Proficiency in cloud and infrastructure technologies, particularly AWS. Working knowledge of APIs, microservices, and messaging systems. Experience with security and compliance best practices. Hands-on experience with automation and tooling, including CI/CD pipelines and DevOps toolchains like Terraform, Ansible, Jenkins, and Bamboo. Strong analytical and problem-solving skills. Experience developing high-performance, scalable, and reliable applications on the cloud. Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes. Experience with TDD and pair programming best practices. Strong communication skills and a customer-centric focus. Passion for continuous learning, knowledge sharing, and staying up to date with the latest technologies and trends. Key Differentiators for Team Members: GenAI Proficiency: Experience with AI coding assistants, copilots, tooling, generative code, and documentation. Utilize GenAI for prototyping, automated testing, documentation, and intelligent code reviews. Observability First Approach: Implement metrics, tracing, and logs as code, with real-time dashboards for every service. Continuous Improvement: Conduct data-driven retrospectives powered by AI insights. Contract Driven Mindset: Enforce strict API contracts and versioning to enable parallel development and autonomous team delivery. Limitless Innovation Mindset: View legacy systems as launchpads for new solutions, challenge assumptions, and pioneer innovations. Code as Craft: Write clean, modular, future-proof code with a focus on transforming old platforms into next-gen systems. 
Self-Healing Systems: Build automated anomaly detection and remediation for platforms to recover without human intervention.
Continuous Learning & Knowledge Sharing: Lead by example, host internal tech talks, publish open-source patterns, and mentor peers.
Agile at Scale: Deliver incremental value rapidly, learn from feedback loops, and iterate without sacrificing stability.
Legacy Modernization Mindset: Break down monoliths into modular microservices, delivering business value in small slices.
Customer-Centric Engineering: Ground decisions in improving user outcomes, reducing friction, and accelerating time to value.

Feel rewarded
For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
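The "observability first" idea above (metrics, tracing, and logs as code) can be illustrated with a small sketch: a decorator that records the latency of every call as it happens. The in-memory METRICS dict stands in for a real metrics client such as StatsD or Prometheus, and the metric name is invented:

```python
import time
import functools

# Stand-in for a real metrics backend; maps metric name -> list of samples.
METRICS = {}

def timed(metric_name):
    """Decorator that records each call's latency in milliseconds."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record latency even when the call raises.
                elapsed_ms = (time.perf_counter() - start) * 1000
                METRICS.setdefault(metric_name, []).append(elapsed_ms)
        return wrapper
    return decorator

@timed("pricing.quote_latency_ms")
def quote(amount):
    return round(amount * 1.05, 2)

result = quote(100)
```

The point of the pattern is that instrumentation ships with the code itself, so every service emits a latency series from day one instead of being retrofitted later.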
Posted 5 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Java – Apache Camel Experience Level: 6 to 12 Years. Location: Hyderabad only Employment Type: Full-Time Job Overview: We are seeking a highly skilled Java – Apache Camel Developer with strong experience in building microservices and enterprise integration solutions. The ideal candidate will have 6 to 12+ years of experience , including a minimum of 2 years hands-on experience with Apache Camel , along with expertise in Spring Boot, messaging systems like Kafka, and modern DevOps and cloud (preferably Azure) environments. Key Responsibilities: Develop and maintain microservices using Java, Spring Boot Design and implement enterprise integration patterns using Apache Camel Integrate applications using messaging systems like Kafka Deploy services on cloud platforms (preferably Azure ) using CI/CD pipelines Build and manage containerized applications using Docker and Kubernetes Work in Agile/Scrum teams and participate in sprint planning, code reviews, and continuous improvement Follow best practices in software development, engineering, and secure coding Mandatory Skills: Java (Core + Spring/Spring Boot) – 6 to 12 years Apache Camel – Minimum 2 years hands-on experience Enterprise Integration – Strong understanding of integration patterns Kafka or other messaging systems – Hands-on experience Cloud Platforms – Preferably Azure Containerization & Orchestration – Docker, Kubernetes DevOps Tools – Jenkins, GitLab CI/CD, Azure DevOps Strong grasp of software engineering principles, clean code, and testing
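Apache Camel builds routes out of enterprise integration patterns such as the content-based router. A language-agnostic sketch of that pattern in plain Python (queue names are illustrative; Camel itself would express this as a route in its Java DSL):

```python
def route(message):
    """Route a message to a destination channel based on its content."""
    msg_type = message.get("type")
    if msg_type == "order":
        return "queue.orders"
    if msg_type == "invoice":
        return "queue.invoices"
    # Unroutable messages go to a dead-letter channel for inspection.
    return "queue.dead-letter"

destinations = [route(m) for m in (
    {"type": "order", "id": 1},
    {"type": "invoice", "id": 2},
    {"type": "unknown"},
)]
```

In a Camel deployment the same decision would sit between a Kafka consumer endpoint and the downstream service endpoints, with the routing logic kept out of the services themselves.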
Posted 5 days ago
7.0 - 12.0 years
6 - 9 Lacs
Hyderabad
Work from Office
Understanding of Spark core concepts like RDDs, DataFrames, DataSets, SparkSQL and Spark Streaming. Experience with Spark optimization techniques. Deep knowledge of Delta Lake features like time travel, schema evolution, and data partitioning. Ability to design and implement data pipelines using Spark and Delta Lake as the data storage layer. Proficiency in Python/Scala/Java for Spark development and integration with ETL processes. Knowledge of data ingestion techniques from various sources (flat files, CSV, APIs, databases). Understanding of data quality best practices and data validation techniques. Other Skills: Understanding of data warehouse concepts and data modelling techniques. Expertise in Git for code management. Familiarity with CI/CD pipelines and containerization technologies. Nice to have: experience using data integration tools like DataStage/Prophecy/Informatica/Ab Initio
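The Delta Lake data partitioning mentioned above works by grouping rows under a column value so queries touching one value can skip the rest. A toy illustration of the grouping idea in plain Python (field names are made up; Delta stores each group as separate files on disk):

```python
from collections import defaultdict

def partition_by(rows, key):
    """Group rows by the value of a partition column."""
    parts = defaultdict(list)
    for row in rows:
        parts[row[key]].append(row)
    return dict(parts)

rows = [
    {"event_date": "2024-01-01", "value": 10},
    {"event_date": "2024-01-02", "value": 20},
    {"event_date": "2024-01-01", "value": 30},
]
parts = partition_by(rows, "event_date")
```

A query filtered on event_date = '2024-01-02' would then only read that partition's files, which is the pruning benefit the posting alludes to.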
Posted 5 days ago
3.0 - 6.0 years
6 - 9 Lacs
Hyderabad
Work from Office
Spark & Delta Lake
Understanding of Spark core concepts like RDDs, DataFrames, DataSets, SparkSQL and Spark Streaming. Experience with Spark optimization techniques. Deep knowledge of Delta Lake features like time travel, schema evolution, and data partitioning. Ability to design and implement data pipelines using Spark and Delta Lake as the data storage layer. Proficiency in Python/Scala/Java for Spark development and integration with ETL processes. Knowledge of data ingestion techniques from various sources (flat files, CSV, APIs, databases). Understanding of data quality best practices and data validation techniques. Other Skills: Understanding of data warehouse concepts and data modelling techniques. Expertise in Git for code management. Familiarity with CI/CD pipelines and containerization technologies. Nice to have: experience using data integration tools like DataStage/Prophecy/Informatica/Ab Initio
Posted 5 days ago
5.0 - 10.0 years
14 - 19 Lacs
Pune
Work from Office
Here at UKG, our purpose is people™. Our HR, payroll, and workforce management solutions help organizations unlock happier outcomes for all. And our U Krewers, who build those solutions and support our business, are talented, collaborative, and innovative problem-solvers. We strive to create a culture of belonging and an employee experience that empowers our people, both at work and at home. Our benefits show that we care about the whole you, from adoption and surrogacy assistance to tuition reimbursement and wellness programs. Our employee resource groups provide a welcoming place to land, learn, and connect with those who share your passions and interests. What are you waiting for? Learn more at www.ukg.com/careers #WeAreUKG

Description & Qualifications
Site Reliability Engineers at UKG are team members that have a breadth of knowledge encompassing all aspects of service delivery. They develop software solutions to enhance, harden and support our service delivery processes. This can include building and managing CI/CD deployment pipelines, automated testing, capacity planning, performance analysis, monitoring, alerting, chaos engineering and auto remediation. Site Reliability Engineers must have a passion for learning and evolving with current technology trends. They strive to innovate and are relentless in their pursuit of a flawless customer experience. They have an "automate everything" mindset, helping us bring value to our customers by deploying services with incredible speed, consistency and availability.
Primary/Essential Duties and Key Responsibilities
Engage in and improve the lifecycle of services from conception to EOL, including system design consulting and capacity planning
Define and implement standards and best practices related to system architecture, service delivery, metrics, and the automation of operational tasks
Support services, product & engineering teams by providing common tooling and frameworks to deliver increased availability and improved incident response
Improve system performance, application delivery and efficiency through automation, process refinement, postmortem reviews, and in-depth configuration analysis
Collaborate closely with engineering professionals within the organization to deliver reliable services
Identify and eliminate operational toil by treating operational challenges as software engineering problems
Actively participate in incident response, including on-call responsibilities

Qualifications
Engineering degree, or a related technical discipline, or equivalent work experience
Experience coding in higher-level languages (e.g., Python, JavaScript, C++, or Java)
Knowledge of cloud-based applications & containerization technologies
Demonstrated understanding of best practices in metric generation and collection, log aggregation pipelines, time-series databases, and distributed tracing
Ability to analyze current technology and engineering practices within the company and develop steps and processes to improve and expand upon them
Working experience with industry standards like Terraform and Ansible
(Experience, Education, Certification, License and Training)
Must have at least 5 years of hands-on experience working within Engineering or Cloud
Minimum 2 years' experience with public cloud platforms (e.g., GCP, AWS, Azure)
Experience in configuration and maintenance of applications & systems infrastructure
Experience with distributed system design and architecture
Experience building and managing CI/CD pipelines

EEO Statement
Equal Opportunity Employer
Ultimate Kronos Group is proud to be an equal opportunity employer and is committed to maintaining a diverse and inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, marital status, familial status, sexual orientation, pregnancy, genetic information, gender identity, gender expression, national origin, ancestry, citizenship status, veteran status, and any other legally protected status under federal, state, or local anti-discrimination laws. View the EEO Know Your Rights poster and its supplement. View the Pay Transparency Nondiscrimination Provision. UKG participates in E-Verify. View the E-Verify posters here.
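Much of the SRE work described above is organized around SLOs and error budgets: how much unreliability a service is allowed before feature work pauses. A small worked example of the error-budget calculation (the SLO target and traffic numbers are illustrative):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget left for the window (can go negative).

    A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
    spending 250 of them leaves 75% of the budget.
    """
    allowed_failures = (1 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0
    return 1 - failed_requests / allowed_failures

remaining = error_budget_remaining(0.999, 1_000_000, 250)
```

Alerting on the budget burn rate, rather than on every individual failure, is what lets teams tolerate small incidents while still protecting the SLO.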
Posted 5 days ago
4.0 - 9.0 years
10 - 15 Lacs
Noida
Work from Office
With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose, people, then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you. We are looking for innovative and dynamic Senior Software Engineers to join our dynamic team. This role provides an opportunity to lead projects and contribute to high-impact software solutions that are used by enterprises and users worldwide. As a Senior Software Engineer, you will be responsible for the design, development, testing, deployment, operation, and maintenance of complex software systems, as well as mentoring junior colleagues. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services. We are seeking engineers with diverse specialties and skills to join our dynamic team to innovate and solve complex problems.
Our team is looking for exceptional engineers with expertise in the following areas:
Front End UI (UI/UX design principles, responsive design, JavaScript frameworks)
Platform (CI/CD pipelines, IaC proficiency, containerization/orchestration, cloud platforms)
Back End (API development, database management, security practices, message queuing)
AI/ML (machine learning frameworks, data processing, algorithm development, big data technologies, domain knowledge)

Responsibilities:
Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
Technical Collaboration: Contribute to the design, development, and deployment of complex software applications and systems, ensuring they meet high standards of quality and performance.
Project Management: Manage execution and delivery of features and projects, negotiating project priorities and deadlines, ensuring successful and timely completion with quality.
Architectural Design: Participate in design reviews with peers and stakeholders and in the architectural design of new features and systems, ensuring scalability, reliability, and maintainability.
Code Review: Diligently review code developed by other engineers, provide feedback, and maintain a high bar of technical excellence, ensuring code adheres to industry-standard best practices: coding guidelines; elegant, efficient, and maintainable code with observability built in from the ground up; unit tests; etc.
Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
Service Health and Quality: Maintain the health and quality of services and incidents, proactively identifying and resolving issues. Utilize service health indicators and telemetry, providing recommendations to optimize performance.
Conduct thorough root cause analysis and drive the implementation of measures to prevent future recurrences.
Platform Model: Understand working in a DevOps model, taking ownership from working with product management on requirements through design, development, testing, continuous deployment, and operating the software in production.
Documentation: Properly document new features, enhancements, or fixes to the product, and contribute to training materials.

Minimum Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
4+ years of professional software development experience.
Deep expertise in one or more programming languages such as C#, .NET, Python, Java, or JavaScript.
Extensive experience with software development practices and design patterns.
Proficiency with version control systems like GitHub and bug/work tracking systems like JIRA.
Understanding of cloud technologies and DevOps principles.

Preferred Qualifications:
Experience with cloud platforms like Azure, AWS, or GCP
Familiarity with CI/CD pipelines and automation tools
Experience with test automation frameworks and tools
Knowledge of agile development methodologies
Familiarity with developing accessible solutions
Demonstrates strong customer empathy by understanding and addressing user needs and challenges
Excellent communication and interpersonal skills, with the ability to work effectively in a collaborative team environment

Where we're going
UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today.
Yet it's our AI-powered product portfolio, designed to support customers of all sizes, industries, and geographies, that will propel us into an even brighter tomorrow!

Disability Accommodation
UKGCareers@ukg.com
Posted 5 days ago
6.0 - 12.0 years
0 Lacs
India
On-site
Job Title: Python Backend Developer - GenAI/OpenAI/FastAPI
Location: Bangalore, Mumbai, Pune, Chennai, Hyderabad, Kolkata
Total exp: 6 to 12 years
Interview Mode: 1 virtual; 1 face to face

Job Description
Role Overview
We are seeking a skilled Python Backend Developer with hands-on experience in FastAPI, system architecture, and end-to-end deployments. Exposure to Generative AI technologies, especially Azure OpenAI, is a strong advantage. This role involves building scalable backend systems, integrating AI capabilities, and working with unstructured data formats like PDFs and Word documents.

Key Responsibilities
Design and develop robust backend systems using Python and FastAPI
Build and deploy RESTful APIs to support AI-driven applications
Implement end-to-end system solutions, including architecture, development, testing, and deployment
Integrate Azure OpenAI models and embeddings into enterprise applications (RAG, prompt engineering)
Process and transform unstructured/semi-structured content (PDFs, DOCs, JSON) into structured formats
Collaborate with cross-functional teams and clients to deliver scalable solutions
Ensure security, performance, and reusability of backend components
Communicate technical concepts effectively to both technical and non-technical stakeholders

Qualifications
6–12 years of backend development experience with Python
Strong expertise in FastAPI, API design, and integration
Experience with Azure OpenAI, embeddings, and prompt engineering
Familiarity with semantic search and handling unstructured data
Proven track record of end-to-end system implementation and deployment
Excellent communication and client-facing skills

Nice to Have
Exposure to GraphRAG, Agentic AI, or similar advanced Gen AI architectures
Experience with LangChain or other orchestration frameworks
Background in document-heavy domains like legal, finance, or compliance
Knowledge of CI/CD pipelines, containerization (Docker), and cloud deployments (Azure preferred)
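One sketch of the "semi-structured to structured" transformation described above: flattening nested JSON (as might be extracted from a document) into flat, column-like records. The field names are invented for illustration:

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested dicts into dot-separated keys, e.g. contract.party."""
    flat = {}
    for key, value in obj.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the key path.
            flat.update(flatten(value, f"{full_key}."))
        else:
            flat[full_key] = value
    return flat

doc = json.loads('{"contract": {"party": "Acme", "terms": {"years": 3}}}')
record = flatten(doc)
```

In the role described, records like this would typically be produced behind a FastAPI endpoint and loaded into a store that downstream RAG or analytics components can query.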
Posted 5 days ago
5.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Cloud Platform Engineer
Project Role Description: Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments, creates a proof of architecture to test architecture viability, security and performance.
Must have skills: Cloud Infrastructure
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Cloud Platform Engineer, you will be responsible for designing, building, testing, and deploying cloud application solutions that integrate cloud and non-cloud infrastructure. Your role involves deploying infrastructure and platform environments, creating proof of architecture to test architecture viability, security, and performance.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the implementation of cloud infrastructure solutions.
- Conduct regular performance evaluations and provide feedback to team members.
- Stay updated on the latest cloud technologies and trends.
- The Global Capability Center (GCC) - IT Foundation Platform (ITFP) Network Product Line (NPL) is responsible for supporting the Business Network, ensuring cost competitive, reliable, and secure operations of Chevron's network environment globally while also enabling digital capabilities.
- Products managed include all Business Network infrastructure products and services globally, including Software Defined Networking, Intent Based Networking, Internet First, Wireless, Telephony, Extranet, WAN, Data Center, Security Services and Life Cycle Management.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Cloud Infrastructure.
- Strong understanding of cloud architecture principles.
- Experience with cloud deployment tools like AWS CloudFormation or Azure Resource Manager.
- Knowledge of networking concepts in cloud environments.
- Hands-on experience with containerization technologies like Docker and Kubernetes.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Cloud Infrastructure.
- This position is based at our Bengaluru office.
- A 15-year full-time education is required.

Qualification: 15 years full time education
Posted 5 days ago
35.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About us
One team. Global challenges. Infinite opportunities. At Viasat, we’re on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We’re looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team.
What you'll do
You are a capable, self-motivated data engineer, proficient in software development methods, including Agile/Scrum. You will be a member of the data engineering team, working on tasks ranging from the design, development, and operation of data warehouses to data platform functions. We enjoy working closely with each other, utilizing an agile development methodology. Priorities can change quickly, but our team members stay ahead to delight every one of our customers, whether they are internal or external to Viasat.
The day-to-day
- 5-8 years of experience in Python programming
- Proven track record, with 5+ years of experience as a data engineer or experience working on data engineering projects/platforms
- Working experience with data pipelines and methodologies
- Experience with SQL and a wide variety of databases, like PostgreSQL
- Good knowledge of and experience with distributed computing frameworks like Spark
- Good experience with source code management systems like Git
- Capable of tuning databases and SQL queries to meet performance objectives
- Bachelor’s degree in computer science, computer engineering, or electrical engineering, or equivalent technical background and experience
- Embracing the DevOps philosophy of product development: in addition to your design and development activities, you are also required to provide operational support for the post-production deployment.
What you'll need
- Experience Requirement: 5+ years
- Education Requirement: Bachelor’s degree
- Travel Requirement: Up to 10%
What will help you on the job
- Experience with cloud providers like AWS, containerization, and container orchestration frameworks like Kubernetes is preferred.
- Working experience with data warehouses and ETL tools.
- Capable of debugging sophisticated issues across various ETL platforms and databases.
- Experience with DevOps and tools such as Jenkins and Ansible is an advantage.
- Experience with small- to mid-sized software development projects.
- Experience with Agile Scrum is a plus.
- Understanding of routing, switching, and basic network communication protocols.
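The query-tuning skill listed above can be demonstrated directly from Python. The sketch below is illustrative only (the table, column names, and data are hypothetical, not Viasat's): it uses SQLite's `EXPLAIN QUERY PLAN` to confirm that adding an index turns a full table scan into an index search.

```python
import sqlite3

# Hypothetical example: compare the query plan before and after adding an index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 50, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

def query_plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute the statement.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

sql = "SELECT COUNT(*) FROM events WHERE user_id = 7"
before = query_plan(sql)  # without an index, the plan contains a full 'SCAN'
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
after = query_plan(sql)   # with the index, the plan switches to 'SEARCH ... USING'
print("before:", before)
print("after:", after)
```

The same technique carries over to PostgreSQL via `EXPLAIN ANALYZE`, where the plan also exposes row estimates and timings.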
Posted 5 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models
Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 7 to 12 years’ experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium-sized components independently
Technical Skills (Must Have)
- ETL: Design, develop, and maintain scalable and efficient data processing pipelines using Python and Spark.
- Big Data: Experience of ‘big data’ platforms such as Hadoop and Hive for data storage and processing.
- Data Warehousing & Database Management: Understanding of Data Warehousing concepts and relational databases (Oracle, MSSQL, MySQL), including SQL and performance tuning.
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures.
- Languages: Proficient in the Python programming language.
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.
Technical Skills (Valuable/Good To Have)
- Cloud: Experience with the AWS Cloud Platform.
- DevOps: Experience with CI/CD tools like LSE (Light Speed Enterprise), Jenkins, GitHub. Certification on any of the above topics would be an advantage.
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls.
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes.
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta.
- Others: Experience of using a job scheduler, e.g., Autosys.
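The "Data Quality & Controls" exposure mentioned above (validation, cleansing, and controls) can be illustrated with a minimal Python sketch. The record type, field names, and rules here are hypothetical, chosen only for illustration, not the Bank's actual controls.

```python
from dataclasses import dataclass

# Hypothetical record type; real pipelines would read these from files or tables.
@dataclass
class Trade:
    trade_id: str
    amount: float
    currency: str

VALID_CURRENCIES = {"USD", "EUR", "GBP", "INR"}  # assumed reference set

def validate(trade: Trade) -> list:
    """Return a list of data-quality failures for one record (empty list = clean)."""
    errors = []
    if not trade.trade_id:
        errors.append("missing trade_id")
    if trade.amount <= 0:
        errors.append("non-positive amount")
    if trade.currency not in VALID_CURRENCIES:
        errors.append(f"unknown currency: {trade.currency}")
    return errors

def split_clean_dirty(trades):
    """Route clean records downstream; quarantine failures for remediation."""
    clean, dirty = [], []
    for t in trades:
        (dirty if validate(t) else clean).append(t)
    return clean, dirty
```

In practice the same rule-per-check structure scales to Spark by expressing each rule as a column predicate and filtering failing rows into a quarantine table.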
Exposure to Business Intelligence tools e.g., Tableau, Power BI ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 5 days ago
30.0 years
0 Lacs
Greater Hyderabad Area
On-site
Overview
JAGGAER provides an intelligent Source-to-Pay and Supplier Collaboration Platform that empowers organizations to manage and automate complex processes while enabling a highly resilient, responsible, and integrated supplier base. With 30 years of expertise, we specialize in solving complex procurement and supply chain challenges across various industries. Our 1,200+ global employees are obsessed with ensuring customers get full value from our products - ultimately enhancing and transforming their businesses. For more information, visit www.jaggaer.com
We are seeking a highly skilled and motivated Cloud Engineer to join our team. The ideal candidate will have extensive experience with AWS and a strong background in cloud infrastructure, automation, and security. This role requires expertise in managing cloud-first environments, leveraging Infrastructure as Code (IaC) tools, and ensuring compliance with industry frameworks. As a Cloud Engineer, you will play a key role in designing, deploying, and maintaining cloud-based solutions that align with our business objectives and operational requirements.
Principal Responsibilities
- Design, implement, and manage cloud infrastructure primarily in AWS, with limited exposure to Google Cloud Console and Azure (Microsoft 365 and SSO).
- Develop and maintain Infrastructure as Code (IaC) solutions using Terraform and Spacelift.
- Automate system administration tasks and configuration management with Ansible.
- Manage AWS services including but not limited to EC2, S3, RDS, VPC, Transit Gateway, Config, WAF, Lambda, IAM, IAM Identity Center, Control Tower, and Redshift.
- Enhance cloud security posture by implementing best practices aligned with NIST, SOC, PCI, ISO, and CIS baselines.
- Optimize and manage Linux-based environments (Amazon Linux, RHEL, Ubuntu) and support Windows systems in corporate and production settings.
- Implement and maintain monitoring, logging, and alerting solutions to ensure system reliability and performance.
- Collaborate with cross-functional teams to deploy and troubleshoot applications in a cloud environment.
- Support network and security configurations, including firewalls, VPNs, and identity management.
- Manage vulnerability scanning and remediation using tools such as Rapid7 (VM, ICS) and endpoint management via Ninja1.
- Provide documentation, training, and knowledge sharing across teams.
- Stay updated on industry trends and emerging cloud technologies to drive innovation and efficiency.
Position Requirements
- Bachelor’s degree in Computer Science, Information Technology, or equivalent work experience.
- 3+ years of experience in cloud engineering or related roles.
- Experience with scripting languages (Python, Bash, PowerShell).
- Strong proficiency in AWS services, networking, and security.
- Hands-on experience with Terraform, Spacelift, and Ansible.
- Familiarity with compliance frameworks including NIST, SOC, PCI, ISO, and CIS.
- Expertise in Linux system administration (RHEL, Amazon Linux, Ubuntu) and Windows support.
- Experience with CI/CD pipelines and automation.
- Experience with logging and monitoring tools like CloudWatch, Splunk, or Prometheus.
- Strong troubleshooting skills and ability to diagnose complex cloud-related issues.
- Goal-oriented with a proactive mindset for continuous improvement and innovation.
- Excellent communication and collaboration skills.
- Ability to work independently and in a team-oriented environment.
Equal Opportunity/Affirmative Action Employer M/F/D/V
Preferred Qualifications
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
- Knowledge of containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with Zero Trust security principles and best practices.
What We Offer
At JAGGAER you’ll find great benefits, an empowering culture, a flexible work environment, and much more! Apply now and be part of our success!
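Compliance checks of the kind this role automates are often simple policy rules run against resource inventories. The sketch below is a hypothetical, simplified tag-compliance check in Python: the resource dicts mimic the tag shape of EC2 `DescribeInstances` responses, and the required tag set is an assumption, not JAGGAER's actual policy.

```python
# Assumed policy: every resource must carry these tag keys.
REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

def missing_tags(resource: dict) -> set:
    """Return the required tag keys absent from one resource description."""
    present = {t["Key"] for t in resource.get("Tags", [])}
    return REQUIRED_TAGS - present

def non_compliant(resources) -> dict:
    """Map resource id -> set of missing tag keys, for resources failing the policy."""
    report = {}
    for r in resources:
        missing = missing_tags(r)
        if missing:
            report[r["InstanceId"]] = missing
    return report
```

In a real setup the same rule would typically live in an AWS Config rule or a CI policy gate rather than an ad-hoc script, so violations surface continuously instead of on demand.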
Our Values
At JAGGAER, our values shape everything we do—from supporting customers and collaborating with teammates to building products and fostering our culture.
- Be Collaborative: Promote mutual respect, work productively with others, and share responsibility for success.
- Be Accountable: Own your actions, learn from challenges, and stay proactive to achieve results.
- Be Adaptable: Embrace change, encourage innovation, and stay effective through significant transitions.
Posted 5 days ago