Jobs
Interviews

90,217 AWS Jobs - Page 15

Set up a Job Alert
JobPe aggregates listings so they are easy to find, but you apply directly on the original job portal.

8.0 years

0 Lacs

Gurgaon

On-site

Gurugram, Haryana, India | Department: Engg | Job posted on: Jul 29, 2025 | Employment type: Employee

About TBO (www.tbo.com): TBO is a global platform that aims to simplify all buying and selling travel needs of travel partners across the world. The proprietary technology platform simplifies the demands of the complex world of global travel by seamlessly connecting highly distributed travel buyers and travel suppliers at scale. The TBO journey began in 2006 with a simple goal: to address the evolving needs of travel buyers and suppliers. What started off as a single-product air ticketing company has today become the leading B2A (Business to Agents) travel portal across the Americas, UK & Europe, Africa, the Middle East, India, and Asia Pacific. Today, TBO's products range across air, hotels, rail, holiday packages, car rentals, transfers, sightseeing, cruises, and cargo. Apart from these products, our proprietary platform relies heavily on AI/ML to offer unique listings and products that meet specific customer requirements, thus increasing conversions. TBO's approach has always been technology-first, and we continue to invest in new innovations and offerings to make travel easy and simple. TBO's travel APIs serve large travel ecosystems across the world, while the platform's modular architecture enables new travel products and expansion into new geographies.

Why TBO: You will influence and contribute to building the world's largest technology-led travel distribution network for a $9 trillion global travel market. We are the emerging leader in technology-led, end-to-end travel management in the B2B space, with a physical presence in 47 countries and business in 110 countries. We are notching up our Gross Transaction Volume (GTV) in several billions and growing much faster than the industry growth rate, backed by a proven and well-established business model. We are reputed for our long-lasting, trusted relationships, and we stand by our ecosystem of suppliers and buyers to serve the end customer. An open and informal start-up environment which cares.

What TBO offers to a Life Traveler in You: A chance to work with CXO leaders; our leadership comes from top IITs and IIMs, or has led significant business journeys for top Indian and global brands. Enhance your leadership acumen. Join the journey to create global scale and the world's best. Challenge yourself to do something path-breaking. Be empowered; the only thing to stop you will be your imagination. The travel space is likely to see significant growth; witness and shape this space. It will be one exciting journey. Own a wide portfolio of our Platform Business, India. Primary focus will be on top talent attraction, retention, development, and engagement. Talent Acquisition, Business HR, HR Operations & Learning will report in, apart from relevant COE functions connected to these domains.

Do you have it in you? Take the voyage ('Must-Haves'): 8+ years of strong hands-on experience implementing DevOps practices. Design, develop, and maintain DevOps processes spanning plan, code, build, test, release, deploy, operate, and monitor. Platform automation using AWS cloud technologies like CDK, Terraform, CloudFormation, etc. Experience in scripting languages such as Python, PowerShell, and Bash. Candidates with experience in Linux, Nginx, Envoy, and Fargate are preferred. Good knowledge of Kubernetes (preferably AWS EKS) and containers using Docker. Design and develop CI/CD pipelines, preferably using GitHub Actions, through end-to-end implementation.

Posted 5 hours ago

Apply

5.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager EXL/M/1435552 | Services | Gurgaon | Posted On: 28 Jul 2025 | End Date: 11 Sep 2025 | Required Experience: 5 - 10 Years

Basic Section: Number Of Positions 1 | Band C1 | Band Name Manager | Cost Code D013514 | Campus/Non Campus NON CAMPUS | Employment Type Permanent | Requisition Type New | Max CTC 1500000.0000 - 2500000.0000 | Complexity Level Not Applicable | Work Type Hybrid – Working Partly From Home And Partly From Office | Organisational Group Analytics | Sub Group Analytics - UK & Europe | Organization Services | LOB Analytics - UK & Europe | SBU Analytics | Country India | City Gurgaon | Center EXL - Gurgaon Center 38 | Skills: JAVA, HTML | Minimum Qualification: B.COM | Certification: No data available

Job Description: Senior Full Stack Developer. Position: Senior Full Stack Developer. Location: Gurugram. Relevant Experience Required: 8+ years. Employment Type: Full-time.

About the Role: We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and vector databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization.

Key Responsibilities:

Front-End Development: Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React. Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI. Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly. Ensure cross-browser compatibility and optimize for performance and accessibility. Collaborate with designers to translate wireframes and prototypes into functional components.

Back-End Development: Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express. Design and implement microservices and event-driven architectures. Optimize server performance and ensure secure API integrations.

Database & Data Management: Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB). Integrate and manage vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations. Implement sharding, clustering, caching, and replication strategies for scalability. Manage both transactional and analytical workloads efficiently.

Real-Time Processing & Visualization: Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams. Build live features (e.g., notifications, chat, analytics) using WebSockets and Server-Sent Events (SSE). Visualize large-scale data in real time for dashboards and BI applications.

DevOps & Deployment: Deploy applications on cloud platforms (AWS, Azure, GCP). Use Docker, Kubernetes, Helm, and Terraform for scalable deployments. Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI. Monitor, log, and ensure high availability with Prometheus, Grafana, and the ELK/EFK stack.

Good to Have - AI & Advanced Capabilities: Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search. Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings. Work on multimodal data processing (text, image, and video).
Preferred Skills & Qualifications

Core Stack: Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI. Back-End: Python (Django/DRF), Node.js/Express. Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, vector databases (Pinecone, Milvus, Weaviate, Chroma). APIs: REST, GraphQL, gRPC.

State-of-the-Art & Advanced Tools: Streaming: Apache Kafka, Apache Pulsar, Redis Streams. Visualization: D3.js, Highcharts, Plotly, Deck.gl. Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD. Cloud: AWS Lambda, Azure Functions, Google Cloud Run. Monitoring: Prometheus, Grafana, OpenTelemetry.

Workflow Type: Back Office

Posted 5 hours ago

Apply

1.0 years

2 - 3 Lacs

Bahādurgarh

On-site

Job Summary: At Agenty.com we're building a fully modern user interface for machine learning, with hundreds of different types of automated agents that handle data collection, processing, validation, translation, cleaning, and integration with client applications. Agenty is used by customers in retail, healthcare, machine learning, artificial intelligence, and many more industries to bring web data to their business. You'll own building this interface in a customer-centric manner, working directly with the founding team and global customers to design and implement features throughout the stack.

What we are looking for: Experience in .NET web API development (MVC, ASP.NET Core, and C#). Hands-on experience building end-to-end systems. Experience writing SQL queries in frameworks like Dapper. Good knowledge of NoSQL databases like MongoDB. Good knowledge of code benchmarking and unit testing in Visual Studio. Knowledge of AWS Cloud or Microsoft Azure is a plus.

Responsibilities and Duties: Build customer-facing web APIs and integrations for global customers in C#, ASP.NET Web API & SQL. Bug fixes and new feature development. Technical documentation, video tutorials, and presentations. Technical support to solve customers' queries.

Required Experience, Skills and Qualifications: ASP.NET, Web API, C#, SQL Server. Job Types: Full-time, Walk-In. Pay: ₹264,000.00 - ₹300,000.00 per year. Benefits: Commuter assistance, Paid time off. Schedule: Day shift. Supplemental Pay: Yearly bonus. Education: Bachelor's (Preferred). Experience: .NET: 1 year (Preferred); JavaScript: 1 year (Preferred).

Posted 5 hours ago

Apply

0 years

0 Lacs

Gurgaon

On-site

8-10 years of operational knowledge of microservices and .NET full-stack, C#, or Python development, as well as Docker. Experience with PostgreSQL or Oracle. Knowledge of AWS S3, and optionally AWS Kinesis and AWS Redshift. Real desire to master new technologies. Unit testing and TDD methodology are assets. Team spirit, analytical and synthesis skills. Passion, software craftsmanship, a culture of excellence, and Clean Code. Fluency in English (multicultural and international team).

What Technical Skills You'll Develop: C# .NET and/or Python; Oracle, PostgreSQL; AWS; ELK (Elasticsearch, Logstash, Kibana); Git, GitHub, TeamCity, Docker, Ansible.

Posted 5 hours ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect - Manager

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity: We're looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities: Proven experience driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, and conducting workshops, as well as managing in-flight projects focused on cloud and big data. Work with clients to convert business problems and challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. Understand current and future-state enterprise architecture. Contribute to various technical streams during project implementation. Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing, and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills And Attributes For Success: Experience as an architect designing highly scalable solutions on Azure, AWS, and GCP. Strong understanding of and familiarity with all Azure/AWS/GCP/Big Data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETLs, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration with multiple Spark jobs to consume messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with big data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications. Experience in data security [on the move, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have: A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player who enjoys working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsibility for the evaluation of technical risks and mapping out mitigation strategies. Working knowledge of any of the cloud platforms: AWS, Azure, or GCP. Excellent business communication, consulting, and quality process skills. Excellent consulting skills. Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth & Asset Management, or Insurance domains. Minimum 7 years of hands-on experience in one or more of the above areas. Minimum 10 years of industry experience.

Ideally, you'll also have: Strong project management skills. Client management skills. Solutioning skills.

What We Look For: People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers: At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 5 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Description: EY GDS – Data and Analytics - D and A – Senior – Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 3 - 7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements : Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 3-7 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. 
Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 5 hours ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Data Engineer (Python) As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance. The opportunity We are currently seeking a seasoned Data Engineer with a good experience in Python to join our team of professionals. Key Responsibilities: Develop Data Lake tables leveraging AWS Glue and Spark for efficient data management. Implement data pipelines using Airflow, Kubernetes, and various AWS services Must Have Skills: Experience in deploying and managing data warehouses Advanced proficiency of at least 4 years in Python for data analysis and organization Solid understanding of AWS cloud services Proficient in using Apache Spark for large-scale data processing Skills and Qualifications Needed: Practical experience with Apache Airflow for workflow orchestration Demonstrated ability in designing, building, and optimizing ETL processes, data pipelines, and data architectures Flexible, self-motivated approach with strong commitment to problem resolution. Excellent written and oral communication skills, with the ability to deliver complex information in a clear and effective manner to a range of different audiences. Willingness to work globally and across different cultures, and to participate in all stages of the data solution delivery lifecycle, including pre-studies, design, development, testing, deployment, and support. Nice to have exposure to Apache Druid Familiarity with relational database systems, Desired Work Experience : A degree in computer science or a similar field What Working At EY Offers At EY, we’re dedicated to helping our clients, from start–ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 5 hours ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. EY-Consulting - Data and Analytics – Senior - Clinical Integration Developer EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated Consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 Companies. Within EY’s Consulting Practice, Data and Analytics team solves big, complex issues and capitalize on opportunities to deliver better working outcomes that help expand and safeguard the businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of client’s decision-making. The opportunity We’re looking for Clinical Trials Integration Developers with 5+ years of experience in software development within the life sciences domain to support the integration of Medidata’s clinical trial systems across the Client R&D environment. This role offers the chance to build robust, compliant integration solutions, contribute to the design of clinical data workflows, and ensure interoperability across critical clinical applications. You will collaborate closely with business and IT teams, playing a key role in enhancing data flow, supporting trial operations, and driving innovation in clinical research. Your Key Responsibilities Design and implement integration solutions to connect Medidata clinical trial systems with other applications within the clinical data landscape. Develop and configure system interfaces using programming languages (e.g., Java, Python, C#) or integration middleware tools (e.g., Informatica, AWS, Apache NiFi). Collaborate with clinical business stakeholders and IT teams to gather requirements, define technical specifications, and ensure interoperability. Create and maintain integration workflows and data mappings that align with clinical trial data standards (e.g., CDISC, SDTM, ADaM). Ensure all development and implementation activities comply with GxP regulations and are aligned with validation best practices. Participate in agile development processes, including sprint planning, code reviews, testing, and deployment. Troubleshoot and resolve integration-related issues, ensuring stable and accurate data flow across systems. Document integration designs, workflows, and technical procedures to support long-term maintainability. Contribute to team knowledge sharing and continuous improvement initiatives within the integration space. Skills And Attributes For Success Apply a hands-on, solution-driven approach to implement integration workflows using code or middleware tools within clinical data environments. Strong communication and problem-solving skills with the ability to collaborate effectively with both technical and clinical teams. Ability to understand and apply clinical data standards and validation requirements when developing system integrations. 
To qualify for the role, you must have: Experience: Minimum 5 years in software development within the life sciences domain, preferably in clinical trial management systems. Education: Must be a graduate, preferably BE/B.Tech/BCA/B.Sc IT. Technical Skills: Proficiency in programming languages such as Java, Python, or C#, and experience with integration middleware like Informatica, AWS, or Apache NiFi; strong background in API-based system integration. Domain Knowledge: Solid understanding of clinical trial data standards (e.g., CDISC, SDTM, ADaM) and data management processes; experience with agile methodologies and GxP-compliant development environments. Soft Skills: Strong problem-solving abilities, clear communication, and the ability to work collaboratively with clinical and technical stakeholders. Additional Attributes: Capable of implementing integration workflows and mappings, with attention to detail and a focus on delivering compliant and scalable solutions.

Ideally, you'll also have: Hands-on experience with ETL tools and clinical data pipeline orchestration frameworks relevant to clinical research. Hands-on experience with clinical R&D platforms such as Oracle Clinical, Medidata RAVE, or other EDC systems. Proven experience leading small integration teams and engaging with cross-functional stakeholders in regulated (GxP) environments.

What We Look For: A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries.

What Working At EY Offers: At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world: EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Posted 5 hours ago

Apply

3.0 years

3 - 10 Lacs

Gurgaon

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing: Taking your place as a core member of an agile team driving the latest development practices. Writing code and unit tests, working with API specs and automation. Identifying opportunities for adopting new technologies. Leading a team of engineers that delivers knowledge management solutions to businesses worldwide.

Key Responsibilities: 1. Perform hands-on design and development of systems. 2. Participate in solution management discussions to drive solutions for the enterprise. 3. Design solutions and implement them into team processes quickly. 4. Perform rapid pilots/POCs to experiment with various engineering optimization and inner-sourcing techniques. 5. Function as an agile team member and help drive consistent development practices w.r.t. tools, common components, and documentation. 6. Identify cross-functional architecture and engineering reuse opportunities. 7. Optimize the current architecture and code base for various Cornerstone-centric data pipelines for better TAT. 8. Support engineers in rapid development and deployment by redesigning and revamping the current code base into more global, composable, and modularized code. 9. Lead data quality issue prevention and remediation; handle exceptions and issues on data quality and run remediation process activities with appropriate data stewards and governance bodies.

Minimum Qualifications: 1. Computer Science or equivalent degree with a minimum of 3+ years of work experience developing software applications. 2. Hands-on experience developing large-scale applications and workflows using Java, SQL, HQL, and Python. 3. Working experience on a cloud platform like GCP, AWS, or Azure. 4. Experience using relational databases like Postgres and MySQL, and NoSQL databases like Couchbase. 5. Should have worked on REST API design and implementation. 6. Experience developing Continuous Integration and Continuous Deployment pipelines using Jenkins or an equivalent. 7. Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores. 8. Excellent written and verbal communication skills and the ability to interpret business needs from Product Owners. 9. Experience supporting and working with cross-functional teams in a dynamic environment.

We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 5 hours ago

Apply

4.0 years

3 - 5 Lacs

Gurgaon

Remote

Job description About this role What are Aladdin and Aladdin Engineering? You will be working on BlackRock's investment operating system called Aladdin , which is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform. It powers informed decision-making and creates a connective tissue for thousands of users investing worldwide. Our development teams are part of Aladdin Engineering . We collaborate to build the next generation of technology that transforms the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments. We perform risk calculations and process millions of transactions for thousands of users worldwide every day. Your Team: The Database Hosting Team is a key part of Platform Hosting Services , which operates under the broader Aladdin Engineering group. Hosting Services is responsible for managing the reliability, stability, and performance of the firm's financial systems, including Aladdin, and ensuring its availability to our business partners and customers. We are a globally distributed team, spanning multiple regions, providing engineering and operational support for online transaction processing, data warehousing, data replication, and distributed data processing platforms. Your Role and Impact: Data is the backbone of any world-class financial institution. The Database Operations Team ensures the resiliency and integrity of that data while providing instantaneous access to a large global user base at BlackRock and across many institutional clients. As specialists in database technology, our team is involved in every aspect of system design, implementation, tuning, and monitoring, using a wide variety of industry-leading database technologies. We also develop code to provide analysis, insights, and automate our solutions at scale. Although our specialty is database technology, to excel in our role, we must understand the environment in which our technology operates. This includes understanding the business needs, application server stack, and interactions between database software, operating systems, and host hardware to deliver the best possible service. We are passionate about performance and innovation. At every level of the firm, we embrace diversity and offer flexibility to enhance work-life balance. Your Responsibilities: The role involves providing operations, development, and project support within the global database environment across various platforms. Key responsibilities include: Operational Support for Database Technology: Engineering, administration, and operations of OLTP, OLAP, data warehousing platforms, and distributed No-SQL systems. Collaboration with infrastructure teams, application developers, and business teams across time zones to deliver high-quality service to Aladdin users. Automation and development of database operational, monitoring, and maintenance toolsets to achieve scalability and efficiency. Database configuration management, capacity and scale management, schema releases, consistency, security, disaster recovery, and audit management. Managing operational incidents, conducting root-cause analysis, resolving critical issues, and mitigating future risks. 
Assessing issues for severity, troubleshooting proactively, and ensuring timely resolution of critical system issues. Escalating outages when necessary, collaborating with Client Technical Services and other teams, and coordinating with external vendors for support. Project-Based Participation: Involvement in major upgrades and migration/consolidation exercises. Exploring and implementing new product features. Contributing to performance tuning and engineering activities. Contributing to Our Software Toolset: Enhancing monitoring and maintenance utilities in Perl, Python, and Java. Contributing to data captures to enable deeper system analysis. Qualifications: B.E./B.Tech/MCA or another relevant engineering degree from a reputable university. 4+ years of proven experience in Data Administration or a similar role. Skills and Experience: Enthusiasm for acquiring new technical skills. Effective communication with senior management from both IT and business areas. Understanding of large-scale enterprise application setups across data centers/cloud environments. Willingness to work weekends on DBA activities and shift hours. Experience with database platforms like SAP Sybase , Microsoft SQL Server , Apache Cassandra , Cosmos DB, PostgreSQL, and data warehouse platforms such as Snowflake , Greenplum. Exposure to public cloud platforms such as Microsoft Azure, AWS, and Google Cloud. Knowledge of programming languages like Python, Perl, Java, Go; automation tools such as Ansible/AWX; source control systems like GIT and Azure DevOps. Experience with operating systems like Linux and Windows. Strong background in supporting mission-critical applications and performing deep technical analysis. Flexibility to work with various technologies and write high-quality code. Exposure to project management. Passion for interactive troubleshooting, operational support, and innovation. Creativity and a drive to learn new technologies. Data-driven problem-solving skills and a desire to scale technology for future needs. Operating Systems: Familiarity with Linux/Windows. Proficiency with shell commands (grep, find, sed, awk, ls, cp, netstat, etc.). Experience checking system performance metrics like CPU, memory, and disk usage on Unix/Linux. Other Personal Characteristics: Integrity and the highest ethical standards. Ability to quickly adjust to complex data and information, displaying strong learning agility. Self-starter with a commitment to superior performance. Natural curiosity and a desire to always learn. If this excites you, we would love to discuss your potential role on our team! Our benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. 
As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R255448

Posted 5 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Title: Product Manager – AI/Healthcare Platform. Location: Remote / Hyderabad (Flexible). Type: Full-time | Immediate Joiner Preferred.

Role Overview: We are seeking an experienced Product Manager to lead the development and delivery of a cutting-edge AI-driven healthcare platform. The ideal candidate will have a strong background in healthtech and AI/ML product ecosystems, with the ability to manage cross-functional teams and translate complex clinical and technical concepts into user-centric solutions. You will own the product roadmap, coordinate execution across engineering and data science teams, and ensure successful stakeholder alignment and market readiness.

Key Responsibilities: Own and drive the product strategy, roadmap, and feature prioritization for AI/LLM-powered healthcare solutions. Work closely with clients, clinicians, and internal stakeholders to define high-impact use cases and product requirements. Translate user needs into detailed user stories, product specs, wireframes, and acceptance criteria. Collaborate with teams across LLM/RAG pipelines, vector DBs, and deployment infrastructure (AWS, Hugging Face, Lambda Labs) to ensure timely and quality delivery. Lead sprint planning, product demos, retrospectives, and release cycles in Agile environments. Define and track KPIs, success metrics, and user feedback loops to guide product decisions. Ensure alignment with HIPAA and healthcare data compliance standards. Partner with SMEs and clinical advisors to validate solutions with authoritative sources (e.g., CDC, WHO, PubMed, NICE). Maintain up-to-date documentation in JIRA, Confluence, Notion, and Miro.

Skills & Qualifications: 6–10 years of product management experience, preferably in healthtech, AI/ML, or enterprise SaaS platforms. Strong understanding of Generative AI, LLMs (e.g., GPT, LLaMA), and Retrieval-Augmented Generation (RAG) workflows. Proven track record of managing AI product lifecycles, from concept to delivery. Experience in cloud-based infrastructure and tools (AWS EC2/S3, Hugging Face, LangChain, etc.). Familiarity with agile methodologies, team collaboration tools, and stakeholder reporting. Excellent communication, leadership, and stakeholder management skills. Ability to work across engineering, QA, DevOps, data, and SME teams to execute a unified product vision.

Preferred (Not Mandatory): Exposure to HIPAA-compliant product development. Experience with LLM performance evaluation, prompt engineering, and model fine-tuning. Prior work on clinical decision support tools, chatbots, or medical informatics systems.

To Apply: Email your resume to ETI.careers@ekshvaku.com with the subject line: Application – Product Manager (AI/Healthcare)

Posted 5 hours ago

Apply

4.0 years

3 - 6 Lacs

Gurgaon

On-site

About ProcDNA: ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge tech to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle - you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged, it's ingrained in our DNA.

What we are looking for: We are looking for a Generative AI Developer to join our team of innovative and passionate developers who are creating cutting-edge AI solutions for various domains and challenges.

What you'll do: • Good/basic understanding of full-stack development (Python/MERN stack) and cloud technologies such as Azure and AWS (important). • DevOps on cloud, with good experience using services such as Git, Terraform, and others. • Develop, test, and deploy AI models using state-of-the-art LLMs (Hugging Face LLMs and others) and libraries such as LlamaIndex and LangChain. • Create custom AI models to address unique challenges and improve existing processes. • Optimize and fine-tune AI models for performance, accuracy, and scalability. • Apply knowledge of Retrieval-Augmented Generation (RAG) and related methodologies to develop sophisticated AI solutions. • Collaborate with cross-functional teams to understand requirements and deliver AI-powered features and enhancements. • Stay abreast of the latest developments in AI, machine learning, and related fields to continuously innovate and improve our offerings.

Must have: • Bachelor's degree or higher in Computer Science, Engineering, or a related field. • 4+ years of experience in AI development, with a strong focus on generative models, NLP, and image processing. • Proficiency in Python and other programming languages for AI development. • Solid understanding of machine learning concepts, data processing, feature extraction, text summarization, and image processing techniques. • Familiarity with Retrieval-Augmented Generation (RAG) and other cutting-edge AI processes. • Proven experience with OpenAI models, including but not limited to GPT (Generative Pre-trained Transformer), and the ability to develop custom models. • Experience working with large-scale datasets and cloud platforms. • Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment. • Strong communication skills and the ability to work collaboratively with a diverse team.

Posted 5 hours ago

Apply

3.0 years

0 Lacs

Gurgaon

On-site

DESCRIPTION Team: EasyShip, Amazon.in (Central Seller Fulfilment) Location: Gurgaon, HR, India At Amazon, we hire the best minds in technology to innovate and build on behalf of our customers. The focus we have on our customers is why we are one of the world’s most beloved brands – customer obsession is part of our company DNA. Our Software Development Engineers (SDEs) use technology to solve complex problems and get to see the impact of their work first-hand. The challenges SDEs solve for at Amazon are big and influence millions of customers, sellers, and products around the world. We are looking for individuals who are passionate about creating new products, features, and services from scratch while managing ambiguity and the pace of a company where development cycles are measured in weeks, not years. If this sounds interesting to you, apply and come chart your own path at Amazon. Applications are reviewed on a rolling basis. For an update on your status, or to confirm your application was submitted successfully, please login to your candidate portal. NOTE: Amazon works with a high volume of applicants, so we appreciate your patience as we review applications Key job responsibilities As a Software Development Engineer (SDE II) within the EasyShip team, you will delve into a distributed service-oriented architecture, harnessing state-of-the-art technologies like AWS, Native iOS, Android, React JS, and others. Your responsibilities encompass creating and implementing fresh system capabilities capable of scaling across multiple marketplaces. You will maintain a comprehensive perspective on software architecture, guiding you through the entire software development journey, from evaluating product requirements to transforming them into actionable engineering tasks. Primary Responsibilities: Collaborate closely with a multidisciplinary team, including Software Development Engineers (SDEs), Product Managers (PMs), Technical Program Managers (TPMs), and Software Development Managers (SDMs) to craft and deliver top-notch technology solutions. Plan, execute, evaluate, deploy, and uphold inventive software solutions that boost service efficiency, resilience, cost-efficiency, and security. Meticulously document your software for the benefit of future developers, ensuring a clear understanding of the features and components you create. Apply best practices in software engineering to maintain the quality of team deliverables at a high standard. Actively engage in defect resolution by identifying root causes and delivering comprehensive solutions. Contribute to the advancement of operational excellence within a rapidly expanding technology stack. Take charge of problem-solving, propose effective solutions, and ensure a seamless handover to the relevant stakeholders. Engage in discussions regarding design, project scope, and prioritization within the team. Participate in code reviews, technical discussions, innovation initiatives, and potentially contribute to patent filings. Play an essential role in the interview process and actively support team training and peer mentorship. Evaluate key performance metrics and actively participate in shaping the evolution of our tech product. 
BASIC QUALIFICATIONS 3+ years of non-internship professional software development experience 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience Experience programming with at least one software programming language PREFERRED QUALIFICATIONS 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience Bachelor's degree in computer science or equivalent Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, HR, Gurugram Software Development

Posted 5 hours ago

Apply

1.0 - 2.0 years

6 - 7 Lacs

Gurgaon

On-site

Gurgaon | 1 to 2 years | Full Time

We are seeking a motivated and detail-oriented Junior Associate DevOps with 1 to 2 years of experience in cloud infrastructure, automation, and CI/CD pipelines. The ideal candidate will support our DevOps team in maintaining scalable, reliable, and secure infrastructure to ensure smooth development and deployment processes.

Key Responsibilities: Assist in the development, maintenance, and improvement of CI/CD pipelines (e.g., GitHub Actions, Jenkins, GitLab CI). Support infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible. Help manage and monitor cloud-based environments (AWS, Azure, or GCP). Collaborate with development and QA teams to ensure seamless code integration and delivery. Participate in routine system maintenance, patching, and deployments. Support containerization and orchestration tools such as Docker and Kubernetes. Monitor system health and troubleshoot infrastructure and deployment issues. Contribute to documentation and standard operating procedures.

Required Skills & Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 1–2 years of hands-on experience in DevOps or infrastructure roles. Familiarity with cloud platforms (AWS preferred; GCP or Azure a plus). Basic knowledge of CI/CD tools and practices. Experience with version control systems (e.g., Git). Exposure to scripting languages like Bash, Python, or PowerShell. Understanding of containerization (Docker) and basic Kubernetes concepts.

Posted 5 hours ago


0 years

1 - 1 Lacs

Gurgaon

On-site

Job Title: AI/ML Internship
Company: Digirocket Technologies
Location: Gurgaon, India
Job Type: Internship (Full-time)
Duration: 6 months
Stipend: [Optional – mention if applicable]

About Digirocket Technologies:
Digirocket Technologies is a forward-thinking technology company focused on delivering cutting-edge digital solutions. Based in Gurgaon, we specialize in leveraging artificial intelligence, machine learning, and data analytics to empower businesses with intelligent automation and insights.

Internship Overview:
We are looking for a passionate and self-motivated AI/ML Intern to join our team. This internship offers hands-on experience with real-world machine learning projects, mentorship from experienced engineers, and exposure to the full ML lifecycle—from data preprocessing to model deployment.

Key Responsibilities:
Assist in building, training, and evaluating machine learning models.
Perform data preprocessing, feature engineering, and exploratory data analysis.
Collaborate with the development team on deploying AI/ML solutions.
Research and experiment with new algorithms and techniques.
Document experiments, code, and project outcomes clearly.

Requirements:
Currently pursuing or recently completed a degree in Computer Science, Data Science, or a related field.
Strong understanding of machine learning concepts and algorithms.
Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, or PyTorch.
Familiarity with data handling using Pandas, NumPy, etc.
Knowledge of basic statistics and data visualization techniques.
Strong problem-solving skills and the ability to work independently.

Preferred Qualifications:
Experience with NLP, computer vision, or deep learning projects.
Familiarity with tools like Jupyter, Git, and cloud platforms (AWS, GCP, or Azure).
Published projects on GitHub or relevant academic work.

What We Offer:
Mentorship from experienced AI professionals.
Opportunity to work on live projects and make meaningful contributions.
Flexible work environment (hybrid/onsite options in Gurgaon).
Certificate of internship and potential full-time offer for top performers.

How to Apply:
Please send your resume, portfolio (if any), and a brief cover letter to hrconnect@digirockett.com

Job Type: Internship
Contract length: 6 months
Pay: ₹10,000.00 - ₹15,000.00 per month
Schedule: Evening shift, Monday to Friday, UK shift, US shift
Supplemental Pay: Overtime pay
Work Location: In person
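As a rough illustration of the "build, train, and evaluate" responsibility above, here is a minimal scikit-learn sketch; the dataset and model choice are arbitrary examples, not part of the internship brief.

```python
# Minimal train/evaluate sketch on a bundled dataset (illustrative only).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load data and hold out a test split for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Fit a simple baseline model and report held-out accuracy.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```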

Posted 5 hours ago


5.0 - 8.0 years

7 - 10 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled Database Developer & Administrator to join our team. The ideal candidate will possess extensive experience in database management, including backup management, query optimization, and performance tuning. This role will focus on both relational and NoSQL databases, with a strong emphasis on MySQL. Experience with MongoDB and cloud environments, especially AWS and RDS, is a plus.

Key Responsibilities:
Database Administration: Manage database infrastructure; ensure the security, availability, and performance of databases.
Backup Management: Implement and oversee regular database backups, ensuring data integrity and the ability to restore data when needed.
Query Optimization: Analyze and optimize complex queries to enhance application performance.
Performance Tuning: Regularly monitor and adjust database parameters and performance metrics to ensure the system is running efficiently.
Database Design: Design and maintain database schemas, ensuring scalability, data integrity, and compliance with business needs.
Cloud Infrastructure: Work with cloud environments like AWS and RDS to ensure smooth database operation in cloud-native systems.
Troubleshooting: Diagnose and resolve database issues in a timely manner to minimize downtime and performance impact.

Required Qualifications:
Extensive experience with relational databases (MySQL, PostgreSQL, etc.).
Strong expertise in SQL and query performance tuning.
Experience with object-oriented databases.
Solid knowledge of MySQL administration and optimization.
Familiarity with MongoDB is a plus.
Experience in database backup and restore management.
Cloud experience with AWS and RDS.

Preferred Skills:
Experience with database replication and high-availability setups.
Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
Knowledge of database security and compliance best practices.
Automation skills using scripting languages like Python or Bash.
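To ground the backup-automation side of this role, here is a hypothetical Python sketch that dumps a MySQL database with mysqldump and uploads the file to S3; the database name, bucket, and credential handling are assumptions, not details from the posting.

```python
# Hypothetical backup sketch: logical dump via mysqldump, then upload to S3.
# Assumes mysqldump is installed, credentials come from ~/.my.cnf, and AWS
# credentials are configured in the environment.
import datetime
import subprocess

import boto3

DB_NAME = "appdb"               # assumed database name
BUCKET = "example-db-backups"   # assumed bucket name

def backup_to_s3() -> None:
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
    dump_file = f"/tmp/{DB_NAME}-{stamp}.sql"

    # Create the logical backup on local disk.
    subprocess.run(["mysqldump", DB_NAME, f"--result-file={dump_file}"], check=True)

    # Push the dump to object storage so it can be restored later.
    boto3.client("s3").upload_file(dump_file, BUCKET, f"mysql/{DB_NAME}-{stamp}.sql")
    print(f"Uploaded {dump_file} to s3://{BUCKET}/mysql/")

if __name__ == "__main__":
    backup_to_s3()
```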

Posted 5 hours ago


5.0 - 7.0 years

7 - 9 Lacs

Hyderabad

Work from Office

Job description
We are seeking a skilled Network Engineer with expertise in both on-premise and cloud-based network solutions. The ideal candidate will have hands-on experience in designing, configuring, and maintaining network infrastructures, with a strong focus on security.

Key Responsibilities:
Design, implement, and maintain robust on-premise and cloud-based network architectures.
Manage cloud networking solutions on platforms such as AWS, Azure, or Google Cloud.
Collaborate with the IT and operations teams to deploy and optimize surveillance and POS systems.
Monitor network performance and ensure system availability and reliability.
Identify and resolve network issues promptly, ensuring minimal downtime.
Develop and maintain documentation for network configurations, processes, and procedures.
Stay updated with the latest industry trends and technologies to enhance network efficiency and security.

Required Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
Minimum 5 years of experience in network engineering and architecture.
Proficiency in configuring and managing network devices (Cisco, Juniper, etc.).
Solid understanding of TCP/IP, VLANs, VPNs, and other networking protocols.
Familiarity with setting up and managing surveillance systems and retail POS systems.
Strong troubleshooting skills and the ability to work under pressure.
Excellent communication and documentation skills.

Preferred Qualifications:
Certifications such as CCNA, CCNP, AWS Certified Advanced Networking, or similar.
Experience in the retail industry, particularly with multi-location networks.
Knowledge of network security practices and compliance standards.
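For a sense of the availability-monitoring work described above, here is a minimal sketch; the hosts and ports are hypothetical placeholders, not equipment from the posting.

```python
# Illustrative availability check: confirm that key devices and POS/surveillance
# endpoints accept TCP connections on their service ports.
import socket

# Hypothetical inventory; a real deployment would load this from config or a CMDB.
ENDPOINTS = [
    ("10.0.0.1", 22),     # core switch, SSH
    ("10.0.0.50", 443),   # surveillance NVR, HTTPS
    ("10.0.1.10", 9100),  # POS printer
]

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in ENDPOINTS:
    print(f"{host}:{port} {'UP' if is_up(host, port) else 'DOWN'}")
```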

Posted 5 hours ago


5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

We are looking for an experienced Senior DevOps Engineer to join our innovative and fast-paced team. The ideal candidate will have a strong background in cloud infrastructure, CI/CD pipelines, and automation. This role offers the opportunity to work with cutting-edge tools and technologies such as AWS, Docker, Kubernetes, Terraform, and Jenkins, while driving the operational efficiency of our development processes in a collaborative environment.

Key Responsibilities:
Infrastructure as Code: Design, implement, and manage scalable, secure infrastructure using tools like Terraform, Ansible, and CloudFormation.
Cloud Management: Deploy and manage applications on AWS, leveraging cloud-native services for performance, cost efficiency, and reliability.
CI/CD Pipelines: Develop, maintain, and optimize CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI to ensure smooth software delivery.
Containerization & Orchestration: Build and manage containerized environments using Docker and orchestrate deployments with Kubernetes.
Monitoring & Logging: Implement monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack) to ensure system reliability and quick troubleshooting.
Automation: Develop scripts and tools to automate routine operational tasks, focusing on efficiency and scalability.
Security & Compliance: Ensure infrastructure and applications meet security best practices and compliance standards.
Collaboration: Work closely with development teams to align infrastructure and deployment strategies with business needs.
Incident Management: Troubleshoot production issues, participate in on-call rotations, and ensure high availability of systems.
Documentation: Maintain clear and comprehensive documentation for infrastructure, processes, and configurations.

Required Qualifications:
Extensive experience in DevOps or Site Reliability Engineering (SRE) roles.
Strong expertise with AWS or other major cloud platforms.
Proficiency in building and managing CI/CD pipelines.
Hands-on experience with Docker and Kubernetes.
In-depth knowledge of Infrastructure as Code (IaC) tools like Terraform or Ansible.
Familiarity with monitoring tools such as Prometheus, Grafana, or New Relic.
Strong scripting skills in Python, Bash, or similar languages.
Understanding of network protocols, security best practices, and system architecture.
Experience in scaling infrastructure to support high-traffic, mission-critical applications.

Preferred Skills:
Knowledge of multi-cloud environments and hybrid cloud setups.
Experience with service mesh technologies (e.g., Istio, Consul).
Familiarity with database management in cloud environments.
Strong problem-solving skills and a proactive mindset.
Ability to mentor junior team members and lead by example.
Experience working in Agile/Scrum environments.

Skills: AWS, CI/CD pipelines, Jenkins, Git, ELK, Docker, Kubernetes, Terraform
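As one hedged example of the routine Kubernetes automation this role describes, here is a small sketch using the official kubernetes Python client; the health criterion and the assumption of a local kubeconfig are illustrative, not requirements from the posting.

```python
# Hypothetical ops sketch: report pods that are not Running or Succeeded.
# Uses the official `kubernetes` Python client and the local kubeconfig.
from kubernetes import client, config

def report_unhealthy_pods() -> None:
    config.load_kube_config()  # assumes a kubeconfig pointing at the cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```

In practice a check like this would feed an alerting pipeline rather than print to stdout.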

Posted 5 hours ago


3.0 - 5.0 years

5 - 7 Lacs

Vadodara

Work from Office

We are seeking a skilled Frontend Developer with hands-on experience in CCTV monitoring, surveillance tools, and face recognition systems. The ideal candidate will have experience integrating Python-based machine learning libraries into frontend systems and is comfortable working with FastAPI for backend interaction.

Key Responsibilities:
Develop and maintain frontend components for CCTV monitoring and surveillance dashboards.
Integrate face recognition features using Python-based libraries into frontend views.
Collaborate with backend engineers working on FastAPI to build seamless, responsive interfaces.
Visualize real-time video feeds, alert systems, and facial recognition results effectively for users.
Optimize frontend performance for real-time surveillance applications.

Required Skills & Experience:
Proven experience working with CCTV monitoring and surveillance platforms.
Familiarity with face recognition libraries (e.g., face_recognition, OpenCV, CMake, dlib) and how to interface them with frontend applications.
Strong knowledge of frontend frameworks such as React.js, Vue.js, or Angular.
Experience working with FastAPI or similar Python web frameworks.
Experience with video streaming protocols (e.g., RTSP, WebRTC).
Background in security or smart surveillance systems.
Work with Python-based machine learning libraries (e.g., TensorFlow, Keras, PyTorch, Scikit-learn, Pandas, NumPy, dlib, face_recognition) to build and integrate machine learning models into applications.
UI/UX design sensibility for control panels and dashboards.
Work closely with cross-functional teams, including backend developers, data scientists, product managers, and designers, to deliver high-quality products.

Preferred Skills:
Cloud Computing & AWS Services: design, develop, and deploy cloud-based applications and services using AWS (Amazon Web Services); manage and configure cloud infrastructure, including EC2, Lambda, S3 storage, API Gateway, and other relevant AWS services.
Understanding of real-time data rendering, WebSocket integration, and live video feeds.
Knowledge of containerized development (Docker) and CI/CD pipelines.
Write clean, maintainable, and efficient code using frontend technologies (HTML5, CSS3, JavaScript, TypeScript, React, React Native, Angular, or Vue.js).
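To illustrate the FastAPI/face_recognition integration this posting refers to, here is a minimal backend sketch a dashboard could call with a captured frame; the endpoint path and field names are assumptions, not part of the role description.

```python
# Illustrative sketch only: a FastAPI endpoint that returns face bounding boxes
# found by the face_recognition library in an uploaded frame.
import io

import face_recognition
import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()

@app.post("/detect-faces")  # hypothetical endpoint a surveillance UI might call
async def detect_faces(frame: UploadFile = File(...)):
    data = await frame.read()
    image = np.array(Image.open(io.BytesIO(data)).convert("RGB"))
    boxes = face_recognition.face_locations(image)  # (top, right, bottom, left)
    return {"faces": len(boxes), "boxes": boxes}
```

A frontend view would POST a frame to this endpoint and draw the returned boxes over the live feed.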

Posted 5 hours ago


5.0 years

10 - 16 Lacs

Mohali

On-site

Job Opening: Team Lead, MERN Stack Developer
Location: Phase 8, Mohali

We are looking for an experienced and highly skilled Senior MERN Stack Developer to join our dynamic team. The ideal candidate will have expertise in developing full-stack web applications using MongoDB, Express.js, React.js, and Node.js. As a Senior Developer, you will take ownership of the development process, work closely with other team members, and help guide the architecture of modern web applications.

Responsibilities:
● Design, develop, and maintain scalable, high-performance web applications using the MERN stack (MongoDB, Express.js, React.js, Node.js).
● Lead development teams and mentor junior developers, fostering a collaborative environment and driving best practices.
● Architect complex, reusable, and maintainable backend and frontend solutions.
● Write clean, well-documented, and efficient code.
● Implement RESTful APIs and integrate third-party services.
● Collaborate with cross-functional teams including product managers, UI/UX designers, and other developers to define and implement features.
● Conduct code reviews and ensure the team follows development standards and best practices.
● Optimize application performance and troubleshoot issues as they arise.
● Stay up to date with emerging web technologies and trends, and provide recommendations for improvements.
● Manage the deployment process, including handling testing, version control, and continuous integration.
● Ensure that applications are secure, responsive, and scalable across various devices and platforms.
● Participate in sprint planning, task prioritization, and project timelines.

Requirements:
Proven experience (5+ years) as a Full Stack Developer, with a focus on the MERN stack (MongoDB, Express.js, React.js, Node.js).
Team handling experience is a must.
Strong proficiency in JavaScript (ES6+), HTML5, CSS3, and modern frontend frameworks (React.js, Redux).
Expertise in backend development with Node.js and Express.js.
Solid understanding of NoSQL databases, particularly MongoDB.
Experience with RESTful API design and integration.
Familiarity with cloud services such as AWS, Azure, or Google Cloud.
Knowledge of modern CI/CD pipelines, Git, and version control best practices.
Understanding of containerization technologies like Docker and Kubernetes is a plus.
Experience with unit testing and test-driven development (TDD).
Strong problem-solving skills, debugging expertise, and an ability to think critically.
Excellent communication skills and the ability to work collaboratively in a fast-paced environment.
Bachelor’s degree in Computer Science, Information Technology, or related field (or equivalent practical experience).

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,600,000.00 per year
Benefits: Paid time off
Work Location: In person

Posted 5 hours ago


2.0 years

7 - 18 Lacs

Mohali

On-site

Job Description: AI Engineer

Overview:
We are looking for a highly skilled and experienced AI/ML Engineer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities:
- Develop and maintain web applications using Django and Flask frameworks.
- Design and implement RESTful APIs using Django Rest Framework (DRF).
- Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
- Build and integrate APIs for AI/ML models into existing systems.
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
- Ensure the scalability, performance, and reliability of applications and deployed models.
- Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
- Write clean, maintainable, and efficient code following best practices.
- Conduct code reviews and provide constructive feedback to peers.
- Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Job Type: Full-time
Pay: ₹700,000.00 - ₹1,800,000.00 per year
Benefits: Flexible schedule, Leave encashment, Provident Fund
Schedule: Day shift
Experience: AI: 2 years (Required)
Work Location: In person
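As a hedged illustration of "build and integrate APIs for AI/ML models," here is a minimal Flask sketch serving a Hugging Face transformer; the endpoint path, payload shape, and choice of sentiment analysis are assumptions for the example only.

```python
# Rough sketch: a Flask endpoint that wraps a transformers pipeline so other
# services can call the model over HTTP. Downloads a default model on first run.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
classifier = pipeline("sentiment-analysis")

@app.route("/predict", methods=["POST"])  # hypothetical route name
def predict():
    text = request.get_json(force=True).get("text", "")
    result = classifier(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    return jsonify({"label": result["label"], "score": float(result["score"])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

The same idea carries over to DRF views or a TorchServe/SageMaker endpoint when the model needs production-grade serving.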

Posted 5 hours ago


3.0 years

4 - 7 Lacs

Mohali

On-site

Experience - 3+ years
Location - Mohali (candidates from nearby locations are preferred)

Job Overview – DOT NET Developer

Skillset:
.Net Framework, .Net Core, C#, MVC, Web API, Dapper, Entity Framework, MS SQL Server, GIT
Angular 12+ (Front-End), HTML, CSS, JavaScript, jQuery
Microsoft Azure Technologies (Resource Groups, Application Services, Azure SQL, Azure Web Jobs, Azure Automation & Runbooks, Azure PowerShell, etc.)
Experience in MVC design and coding.

Key responsibilities of the candidate will be:
· Be a key part of the full product development life cycle of software applications.
· Ability to prototype solutions quickly and analyze/compare multiple solutions and products based on requirements.
· Always concentrate on performance, scalability, and security.
· Hands-on experience in building REST-based solutions conforming to HTTP standards and knowledge of how TLS/SSL works.
· Proficiency with technologies like C#, ASP.NET, ASP.NET MVC, ASP.NET Core, JavaScript, Web API, and REST Web Services.
· Working knowledge of various client-side frameworks – jQuery (Kendo UI, AngularJS, ReactJS will be additional knowledge).
· Experience with cloud services offered by MS Azure.
· Understanding and analyzing the non-functional requirements for the system and how the architecture reflects them.
· In-depth knowledge of encoding and encryption techniques and their usage.
· Extensive knowledge of different industry standards like OAuth 2.0, SAML 2.0, OpenID Connect, OpenAPI, SOAP, HTTP, HTTPS.
· Proficiency with development tools – Visual Studio.
· Proficiency with application servers – IIS, Apache (considering the .NET Core framework is platform independent).
· Experience in designing and implementing applications utilizing databases – MySQL, MS SQL Server, Oracle, AWS Aurora, Azure Database for MySQL, and non-relational databases.
· Strong problem-solving and analytical skills.
· Experience with microservices-based architecture is a plus.

Job Types: Full-time, Permanent
Pay: ₹35,000.00 - ₹65,000.00 per month
Experience: .NET: 3 years (Preferred)
Work Location: In person

Posted 5 hours ago


3.0 years

6 - 6 Lacs

Mohali

On-site

Role Overview
As a Full-Stack Software Engineer, you will be responsible for building and maintaining web applications that power our internal systems. You will work across the stack, contributing to both frontend and backend development, and collaborate with stakeholders across technology, operations, and capital markets. This is a great opportunity for someone with strong technical foundations and an interest in working in a fast-paced environment.

Key Responsibilities
Design, develop, test, and maintain full-stack features for our internal and external stakeholders.
Build intuitive and responsive user interfaces using modern frontend frameworks (Vue.js).
Develop and maintain backend services and APIs to process, store, and retrieve mortgage-related data.
Collaborate with cross-functional teams to understand user needs and translate them into scalable software solutions.
Support deployment processes and monitor application performance in production environments.

Required Qualifications
Minimum of 3 years of professional experience in software engineering.
Experience with both frontend and backend development.
Familiarity with modern frontend frameworks such as React, Vue, or Angular.
Proficiency in a server-side language such as Python, JavaScript (Node.js), or similar.
Understanding of web application architecture, RESTful APIs, and version control systems (e.g., Git).
Strong problem-solving and collaboration skills.

Preferred Qualifications
Experience with Python for backend development or scripting.
Working knowledge of SQL and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
Exposure to cloud services (e.g., AWS, GCP, Azure).
Experience working in the mortgage lending, asset-backed securities, loans, and/or financial technology sector.
Familiarity with data pipelines or integration of third-party APIs.

Job Type: Full-time
Pay: ₹600,000.00 - ₹660,000.00 per year
Ability to commute/relocate: Mohali, Punjab: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Application Question(s): What tools are you using on the front end? What tools are you using on the back end? Current CTC? Notice period?
Experience: dot net: 1 year (Preferred); SQL: 1 year (Preferred); jQuery: 1 year (Required)
Work Location: In person

Posted 5 hours ago


4.0 years

0 Lacs

Mohali district, India

On-site

DevOps Engineer
📍 Mohali
🕒 Experience: 2–4 years

About MrProptek
MrProptek is the world’s first AI-powered property booking platform, transforming the real estate experience with advanced tools like Oora (our AI property assistant) and Aug (4K/3D virtual walkthroughs). Founded in Chandigarh, we are rapidly scaling across India and globally with a bold mission: to enable users to discover and book properties in under 10 minutes.

Role Overview
We are looking for a DevOps Engineer who’s excited about building scalable, secure, and high-performance infrastructure. You'll work closely with developers, QA engineers, and product teams to drive automation, streamline deployments, and ensure system reliability.

Your Key Responsibilities
Design, build, and manage CI/CD pipelines to support rapid development and deployment.
Manage and optimize AWS cloud infrastructure.
Automate infrastructure using Terraform, Ansible, or CloudFormation.
Write Python scripts to automate tasks and improve system efficiency.
Monitor system health using tools like Prometheus, Grafana, ELK, or similar.
Troubleshoot production issues and maintain system reliability and performance.
Collaborate across teams to ensure smooth product delivery and DevOps best practices.

What We’re Looking For
2–4 years of hands-on experience in a DevOps or SRE role.
Experience with AWS, Azure, or Google Cloud Platform (GCP).
Proficient with CI/CD tools like Jenkins, GitLab CI, or CircleCI.
Strong knowledge of Docker and Kubernetes.
Scripting experience (preferably in Python).
Familiarity with monitoring, alerting, and logging stacks.
Strong problem-solving skills and ownership mindset.
Excellent communication and teamwork abilities.
Immediate joiners are preferred.

Interested candidates can drop their resume at simranjot.kaur@mrproptek.com
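For a concrete feel of the Prometheus-based monitoring this role lists, here is a small Python sketch exposing a custom health metric for a Prometheus server to scrape; the metric name, port, and disk-usage check are assumptions chosen for the example.

```python
# Illustrative sketch using the prometheus_client library: expose a gauge on an
# HTTP endpoint; the disk-usage check stands in for any system-health signal.
import shutil
import time

from prometheus_client import Gauge, start_http_server

disk_free_ratio = Gauge("disk_free_ratio", "Free disk space as a fraction of total")

def collect() -> None:
    usage = shutil.disk_usage("/")
    disk_free_ratio.set(usage.free / usage.total)

if __name__ == "__main__":
    start_http_server(9200)  # metrics served at http://localhost:9200/metrics
    while True:
        collect()
        time.sleep(15)
```

A Prometheus scrape job would then pull this endpoint, with Grafana dashboards and alert rules layered on top.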

Posted 5 hours ago


3.0 - 4.0 years

6 - 9 Lacs

Mohali

On-site

Employment Type: Full-time

About the Role:
We are looking for a highly skilled Full-Stack Developer with 3-4 years of experience who is proficient in React, Node.js, PHP, Laravel, WordPress, MySQL, MongoDB, Next.js, and Python. The ideal candidate should have strong communication skills, as they will be required to interact with clients, understand project requirements, and provide technical solutions.

Key Responsibilities:
Develop and maintain full-stack applications using React, Next.js, and Node.js for front-end and back-end development.
Work with PHP frameworks (Laravel, WordPress) to develop web applications and CMS-based solutions.
Design and manage databases using MySQL and MongoDB.
Develop and integrate REST APIs and work with third-party API integrations.
Work with AWS cloud services for deployment, hosting, and server management.
Understand project requirements, discuss them with clients, and translate them into technical implementations.
Troubleshoot, debug, and optimize applications for performance and scalability.
Collaborate with designers, product managers, and other developers to deliver high-quality software solutions.

Required Skills & Qualifications:
✅ Front-End: React.js, Next.js, JavaScript, TypeScript, HTML, CSS, TailwindCSS
✅ Back-End: Node.js, PHP, Laravel, WordPress, Python
✅ Databases: MySQL, MongoDB
✅ API Development: REST APIs, third-party API integration
✅ Cloud & DevOps: AWS (EC2, S3, RDS, Lambda)
✅ Version Control: Git, GitHub, Bitbucket
✅ Soft Skills: Excellent verbal and written communication skills; strong problem-solving and analytical skills; ability to interact with clients and gather technical requirements; self-motivated, team player, and eager to learn new technologies.

Good to Have:
Experience with GraphQL
Experience with Docker and Kubernetes
Experience with CI/CD pipelines

Job Types: Full-time, Permanent
Pay: ₹50,211.50 - ₹80,840.48 per month
Supplemental Pay: Performance bonus, Yearly bonus
Work Location: In person

Posted 5 hours ago
