0 years
6 - 10 Lacs
Hyderābād
On-site
Technical Skills:
- Programming: proficiency in languages like Python, Bash, or Java is essential.
- Operating Systems: deep understanding of Linux/Windows operating systems and networking concepts.
- Cloud Technologies: experience with AWS and Azure, including services, architecture, and best practices.
- Containerization and Orchestration: hands-on experience with Docker, Kubernetes, and related tools.
- Infrastructure as Code (IaC): familiarity with tools like Terraform, CloudFormation, or the Azure CLI.
- Monitoring and Observability: experience with tools like Splunk, New Relic, or Azure Monitor.
- CI/CD: experience with continuous integration and continuous delivery pipelines, GitHub, and GitHub Actions.
- Knowledge in supporting Azure ML, Databricks, and other related SaaS tools.

Soft Skills:
- Problem-Solving: ability to troubleshoot and debug complex distributed systems independently.
- Communication: strong written and verbal communication skills to collaborate with development and operations teams, and the ability to write documentation such as runbooks.

Specific Experience:
- Incident Management: experience with incident response, root cause analysis, and post-incident reviews.
- Scalability and Performance: understanding of scalability, availability, and performance monitoring for large-scale systems.
- Automation: experience in automating repetitive tasks and workflows.

Preferred Qualifications:
- Experience with specific cloud platforms (AWS, Azure).
- Certifications related to cloud engineering or DevOps.
- Experience with microservices architecture, including supporting AI/ML solutions.
- Experience with large-scale system management and configuration.

Job Type: Full-time
Schedule: Day shift
Work Location: In person
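The automation expectation in this posting is often as simple as scripting away a recurring manual check. A minimal illustrative sketch in Python; the log format and the 5% alert threshold are invented for illustration, not taken from the posting:

```python
from collections import Counter

def error_rate(log_lines):
    """Return the fraction of log lines whose first token (the level) is ERROR."""
    levels = Counter(line.split(" ", 1)[0] for line in log_lines if line.strip())
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

def should_alert(log_lines, threshold=0.05):
    """Flag the service for paging when the error rate crosses the threshold."""
    return error_rate(log_lines) > threshold

logs = [
    "INFO request served in 12ms",
    "ERROR upstream timeout",
    "INFO request served in 9ms",
    "INFO healthcheck ok",
]
print(should_alert(logs))  # 1 error in 4 lines = 25% -> True
```

A real version of this would run on a schedule (cron, a Kubernetes CronJob) and push the alert into the team's incident tooling instead of printing.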
Posted 8 hours ago
5.0 - 8.0 years
0 Lacs
Hyderābād
On-site
Req ID: 330804

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an Azure Cloud Engineer to join our team in Bangalore, Karnātaka (IN-KA), India (IN).

Domain/skills required:
- Banking domain; Azure consultant
- Bachelor's/Master's degree in Computer Science or Data Science
- 5 to 8 years of experience in software development and with data structures/algorithms
- 5 to 7 years of experience with a programming language (Python or Java), database languages (e.g., SQL), and NoSQL
- 5 years of experience developing large-scale platforms, distributed systems or networks, with experience in compute technologies and storage architecture
- Strong understanding of microservices architecture
- Experience building AKS applications on Azure
- Strong understanding of, and experience with, Kubernetes for availability and scalability of applications in Azure Kubernetes Service
- Experience building and deploying applications on Azure using third-party tools (e.g., Docker, Kubernetes, and Terraform)
- Experience working with AKS clusters, VNETs, NSGs, Azure storage technologies, Azure container registries, etc.
- Good understanding of building Redis, Elasticsearch, and MongoDB applications
- Preferably has worked with RabbitMQ
- End-to-end understanding of ELK, Azure Monitor, Datadog, Splunk, and the logging stack
- Experience with development tools and CI/CD pipelines such as GitLab CI/CD, Artifactory, CloudBees, Jenkins, Helm, Terraform, etc.
- Understanding of IAM roles on Azure, plus integration/configuration experience
- Preferably has worked on DataRobot setup or similar applications on Cloud/Azure
- Functional, integration, and security testing; performance validation

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services.
We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 8 hours ago
1.5 years
2 - 3 Lacs
India
On-site
Job Description

Key Responsibilities:
- Backend Development: design and develop scalable web applications using Python and Django. Build and maintain RESTful APIs to support frontend functionality and ensure seamless integration.
- Frontend Development: develop responsive user interfaces using React.js, HTML5, CSS3, and JavaScript (ES6+). Collaborate with UX/UI designers to implement engaging user experiences.
- Database Management: work with relational databases like PostgreSQL or MySQL. Optimize database queries and ensure application performance.
- Version Control: utilize Git for version control and collaborate effectively within a team environment.
- Deployment & Cloud Services: deploy applications to cloud platforms such as AWS EC2 instances. Familiarity with Docker and containerization is a plus.
- Agile Development: participate in Agile/Scrum ceremonies and contribute to improving development processes.

Required Skills:
- Backend: Python, Django, Django REST Framework (DRF)
- Frontend: React.js, HTML5, CSS3, JavaScript (ES6+)
- Database: PostgreSQL, MySQL
- Version Control: Git
- Cloud Platforms: AWS (EC2, S3, etc.)
- Containerization: Docker (optional but beneficial)
- CI/CD: Jenkins, GitLab CI (optional)
- Agile Methodologies: Scrum, Kanban

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience: 1.5 years in full-stack development with a strong focus on Python, Django, and React.
- Certifications: optional certifications in cloud platforms (AWS, Azure) or relevant technologies are a plus.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹25,000.00 per month
Location Type: In-person
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 8530474200
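The "build and maintain RESTful APIs" responsibility above can be pictured with a framework-free sketch; in the actual stack this would be a Django REST Framework view backed by the ORM. The `/tasks` resource and its fields are invented for illustration:

```python
import json

# Toy in-memory "database"; the real stack would use PostgreSQL via the Django ORM.
TASKS = [{"id": 1, "title": "Write docs"}, {"id": 2, "title": "Review PR"}]

def app(environ, start_response):
    """A minimal WSGI app exposing GET /tasks as JSON - a stand-in for a DRF list view."""
    if environ["PATH_INFO"] == "/tasks" and environ["REQUEST_METHOD"] == "GET":
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps(TASKS).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"detail": "not found"}).encode()]

def call(path, method="GET"):
    """Exercise the app directly, without starting a server."""
    status = {}
    def start_response(s, headers):
        status["code"] = s
    body = b"".join(app({"PATH_INFO": path, "REQUEST_METHOD": method}, start_response))
    return status["code"], json.loads(body)

print(call("/tasks"))
```

Because WSGI apps are plain callables, the same direct-call trick is how view-layer unit tests avoid network overhead.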
Posted 8 hours ago
5.0 - 9.0 years
2 - 9 Lacs
Hyderābād
On-site
Job Description:
Experience: typically requires 5-9 years of experience.

Role & Responsibilities:
- Works directly with the client user community and business analysts to define and document data requirements for data integration and business intelligence applications.
- Determines and documents data mapping rules for movement of medium- to high-complexity data between applications.
- Adheres to and promotes the use of data administration standards.
- Supports data selection, extraction, and cleansing for corporate applications, including data warehouses and data marts.
- Creates and sustains processes, tools, and ongoing support structures.
- Investigates and resolves data issues across platforms and applications, including discrepancies of definition, format, and function.
- Creates and populates metadata in repositories.
- May create data models, including robust data definitions, which may be entity-relationship-attribute models, star, or dimensional models. May also create data flow diagrams and process models, and integrate models across functional areas and platforms.
- Works closely with DBAs to transition logical models to physical implementation.
- May be responsible for employing data mining techniques to achieve data synchronization, redundancy elimination, source identification, data reconciliation, and problem root cause analysis. May also be responsible for quality control and auditing of databases, resolving data problems, and analyzing system changes for quality assurance.

Required Skills:
- Full life-cycle experience on enterprise software development projects.
- Experience with Snowflake, Databricks, and Hadoop; fluent in SQL, PostgreSQL, Vertica, Event Hub, GoldenGate, MongoDB, and data analysis techniques.
- Experience with AI/ML and Python would be an added advantage.
- Experience with any of the SQL databases (MySQL, PostgreSQL), NoSQL databases (MongoDB, Cassandra, Azure Cosmos DB), distributed databases, or big data platforms (Apache Spark, Cloudera, Vertica); a Databricks or Snowflake certification would be an added advantage.
- Extensive experience in ETL, shell or Python scripting, data modelling, analysis, and preparation.
- Experience with Unix/Linux systems, file systems, and shell scripting.
- Good to have: knowledge of cloud platforms such as AWS, Azure, Snowflake, etc.
- Good to have: experience with BI reporting tools (Power BI or Tableau).
- Good problem-solving and analytical skills used to resolve technical problems.
- Ability to work independently, but must be a team player; should be able to drive business decisions and take ownership of the work.
- Experience in presentation design, development, and delivery, with good communication skills to present analytical results and recommendations for action-oriented, data-driven decisions and the associated operational and financial impacts.
- Sharp technical troubleshooting skills.
- Keeping up to date with developments in technology and industry norms helps produce higher-quality results.
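The "data mapping rules" responsibility above amounts to declaring source-to-target field rules once and applying them uniformly to every record moved between applications. A toy Python sketch; the field names and transforms are invented for illustration:

```python
# Each rule: target field -> (source field, transform). Fields are invented examples.
MAPPING_RULES = {
    "customer_id": ("custId", str),
    "full_name":   ("name", str.strip),
    "balance_usd": ("balanceCents", lambda cents: cents / 100),
}

def apply_mapping(source_record, rules=MAPPING_RULES):
    """Move one record between applications by applying the documented mapping rules."""
    return {target: transform(source_record[src])
            for target, (src, transform) in rules.items()}

src = {"custId": 42, "name": "  Ada Lovelace ", "balanceCents": 125050}
print(apply_mapping(src))
# {'customer_id': '42', 'full_name': 'Ada Lovelace', 'balance_usd': 1250.5}
```

Keeping the rules in data rather than code is what lets the same engine handle many source/target pairs, and makes the mapping document itself auditable.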
Flexibility to work from the office three days a week, from 1 pm to 10 pm. #SoftwareEngineering

Weekly Hours: 40
Time Type: Regular
Location: IND:KA:Bengaluru / Innovator Building, ITPB, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Posted 8 hours ago
0 years
6 - 8 Lacs
Hyderābād
On-site
Senior AI/ML Engineer - R01551337 (Senior Data Science Lead)

Primary Skills: hypothesis testing (t-test, z-test), regression (linear, logistic), Python/PySpark, SAS/SPSS, statistical analysis and computing, probabilistic graphical models, Great Expectations, Evidently AI, forecasting (exponential smoothing, ARIMA, ARIMAX), tools (Kubeflow, BentoML), classification (decision trees, SVM), ML frameworks (TensorFlow, PyTorch, scikit-learn, CNTK, Keras, MXNet), distance measures (Hamming, Euclidean, Manhattan), R/RStudio

Job Requirements

The Agentic AI Lead is a pivotal role responsible for driving the research, development, and deployment of semi-autonomous AI agents to solve complex enterprise challenges. This role involves hands-on experience with LangGraph, leading initiatives to build multi-agent AI systems that operate with greater autonomy, adaptability, and decision-making capability. The ideal candidate will have deep expertise in LLM orchestration, knowledge graphs, reinforcement learning (RLHF/RLAIF), and real-world AI applications. As a leader in this space, they will be responsible for designing, scaling, and optimizing agentic AI workflows, ensuring alignment with business objectives while pushing the boundaries of next-gen AI automation.

Key Responsibilities

1. Architecting & Scaling Agentic AI Solutions
- Design and develop multi-agent AI systems using LangGraph for workflow automation, complex decision-making, and autonomous problem-solving.
- Build memory-augmented, context-aware AI agents capable of planning, reasoning, and executing tasks across multiple domains.
- Define and implement scalable architectures for LLM-powered agents that integrate seamlessly with enterprise applications.

2. Hands-On Development & Optimization
- Develop and optimize agent orchestration workflows using LangGraph, ensuring high performance, modularity, and scalability.
- Implement knowledge graphs, vector databases (Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) techniques for enhanced agent reasoning.
- Apply reinforcement learning (RLHF/RLAIF) methodologies to fine-tune AI agents for improved decision-making.

3. Driving AI Innovation & Research
- Lead cutting-edge AI research in agentic AI, LangGraph, LLM orchestration, and self-improving AI agents.
- Stay ahead of advancements in multi-agent systems, AI planning, and goal-directed behavior, applying best practices to enterprise AI solutions.
- Prototype and experiment with self-learning AI agents, enabling autonomous adaptation based on real-time feedback loops.

4. AI Strategy & Business Impact
- Translate agentic AI capabilities into enterprise solutions, driving automation, operational efficiency, and cost savings.
- Lead agentic AI proof-of-concept (PoC) projects that demonstrate tangible business impact, and scale successful prototypes into production.

5. Mentorship & Capability Building
- Lead and mentor a team of AI engineers and data scientists, fostering deep technical expertise in LangGraph and multi-agent architectures.
- Establish best practices for model evaluation, responsible AI, and real-world deployment of autonomous AI agents.
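The RAG technique named in the responsibilities above retrieves the documents most relevant to a query and feeds them to the LLM as grounding context. A dependency-free sketch of that retrieve-then-augment loop; a real system would use learned embeddings and a vector database such as FAISS, whereas the bag-of-words "embedding" here is only a stand-in:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector (stand-in for a learned model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query - the 'R' in RAG."""
    ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context before handing it to an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Kubernetes schedules containers across a cluster of nodes.",
    "RAG grounds LLM answers in retrieved documents.",
    "ARIMA models are used for time-series forecasting.",
]
print(build_prompt("How does RAG help LLM answers?", docs))
```

In an agent framework such as LangGraph, a retrieval step like `retrieve` typically sits as one node in the agent's graph, with the LLM call consuming the augmented prompt downstream.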
Posted 8 hours ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role: OSTTRA India

The Role: Associate I SQA Engineer – Performance Testing

The Team: OSTTRA's Testing Division is a vibrant team whose primary focus is the testing phase of the product development cycle. The group also has a responsibility to champion quality and process improvements in the delivery of all applications that will be onboarded onto the new (micro-services based) platforms. The platforms extensively use a CD model and cutting-edge technologies, viz. Git, OpenShift, Elasticsearch, Python, the Cucumber framework, and the Protractor framework. The strategic direction is the global harmonisation of this platform and the onboarding of various applications across the BU onto it. The QA group technically supports cross-functional teams through an elaborate QA infra automation and performance test suite. The QA group works across several locations – UK, India, and US. A key component identified, as part of the move towards harmonisation of the platform, is a non-functional testing capability. The non-functional test team is small, focused, and experienced.

The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.

What's in it for you: You will be a key player in the India-based QA NFT team, working alongside team members in India, UK, and US on critical Agile/Waterfall projects, and closely coordinating with developers, business analysts and project management on day-to-day sprint/monthly release deliveries. You will also work closely with the Product, Application Support and Operations teams to support issues found in UAT and production.
This is an excellent opportunity to be part of a team based out of Gurgaon and to work with colleagues across multiple regions globally.

Responsibilities:
- Designing and executing load test scenarios and performing other non-functional testing (particularly resilience- and failure-related).
- Extending the existing load test framework, including introducing support for new systems and technologies.
- Specification and development of test harnesses, as required.
- Analysing test results and associated statistics (application, OS, network, etc.).
- Automating/enhancing other NFT activities.
- Introducing mechanisms to improve the effectiveness and efficiency of existing testing practices.

What We're Looking For:
- University graduate with a Computer Science or Engineering degree.
- Strong background in testing methodology.
- 3-5 years in a structured testing organization, with at least 3 years of hands-on performance testing experience in a multi-tier environment.
- Scripting/programming skills in PL/SQL and shell; some JavaScript.
- RDBMS skills (preferably Oracle).
- *NIX experience (preferably Linux).
- Well versed in performance testing concepts and their implementation.
- Experience with performance testing tools (LoadRunner, JMeter, Locust, or others).
- Experience with performance monitoring/diagnostic tools (Wireshark, AppDynamics, Splunk, or others).
- Hands-on experience identifying performance bottlenecks.
- Identifying issues and areas for improvement in tools and scenario design.
- Requirement and specification documentation analysis.
- Issue investigation and diagnosis (including raising and progressing defects through the defined lifecycle).
Non-functional support for any application's manual and automated testing functions. The role will interact internally with other global test teams, and with externally facing teams such as business analysis, development, delivery, UAT, Technology Account Management (TAM) and application support.

The Location: Gurgaon, India

About Company Statement: OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes and networks together to solve the post-trade challenges of global financial markets. OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk and optimise processing to drive post-trade efficiencies. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post-trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. These businesses have an exemplary track record of developing and supporting critical market infrastructure and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities.

About OSTTRA: Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA; however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post-trade experts.
OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end to end workflows - from trade capture at the point of execution, through portfolio optimization, to clearing and settlement. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com. What’s In It For You? Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. 
Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group) Job ID: 315725 Posted On: 2025-06-16 Location: Gurgaon, Haryana, India
Posted 8 hours ago
0 years
0 Lacs
Hyderābād
On-site
Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Marketing Title.

In this role, you will:
- Apply strong technical skills and hands-on experience in the development and enhancement of ETL/ELT jobs, shell scripting, and database technologies, along with an understanding of data warehousing concepts.
- Operate and manage workloads in GCP, ensuring scalability, resilience, and security.
- Develop and maintain networking solutions, including connectivity between GCP and on-premises environments.
- Participate in the development of applications using Python/PySpark.
- Automate operational tasks and tooling with Python and Jenkins.
- Implement observability best practices.
- Convert requirements into sustainable technical solutions through coding best practices.
- Contribute to full-stack development of tools and dashboards.

Requirements: To be successful in this role, you should meet the following requirements:
- Extensive experience with GCP (GCS, BigQuery, Cloud SQL), Python, PySpark, and Airflow
- Containers and orchestration (Docker, Kubernetes)
- CI/CD (Jenkins)
- Scripting/automation (Python, shell)
- Observability (Prometheus, Grafana)
- Scheduling tools (Control-M)
- English (B2+)

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected, and all opinions count.
We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 8 hours ago
0 years
5 - 10 Lacs
Hyderābād
Remote
When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

Responsibilities:
- Designing, developing and maintaining applications and databases by evaluating business needs, analyzing requirements and developing software systems.
- Executing the full software development life cycle (SDLC): concept, design, build, deploy, test, release and support.
- Ensuring the application development lifecycle is on track and adjusting the plan to meet release timelines.
- Acting as a strategic thinker joining a high-profile, high-visibility team that powers data science and strategic thinking for Verizon.
- Implementing information security concepts, practices and procedures to build security solutions.

What we're looking for...

You are curious about new technologies and the possibilities they create. You enjoy the challenge of supporting applications while exploring ways to improve the technology. You are driven and motivated, with good communication and analytical skills. You're a sought-after team member who thrives in a dynamic work environment. You have a thirst for working on cutting-edge technology and the drive to change the status quo.

You'll need to have:
- Bachelor's degree or four or more years of work experience.
- Two or more years of relevant work experience.
- Two or more years of experience with frontend/web technologies and backend services.
- Development experience in Core/Advanced Java and J2EE.
- Experience with design patterns.
- Experience with JMS and Spring Boot (REST and SOAP API skills); experience with Spring frameworks (MVC, IoC, Boot, Batch) and an ORM framework like Hibernate.
- Experience with Oracle and SQL.
- Experience with Core Java, J2EE, SOA-based web services, and RESTful web services.
- Development experience with web services (SOAP and REST).
- Strong understanding of artificial intelligence and machine learning implementation; ability to build, train, evaluate, and deploy machine learning models to address specific business problems.
- Knowledge of Secure SDLC.
- Knowledge of cloud-native application development.
- Effective code review, quality, and performance tuning experience.
- Knowledge of shell scripting (Bash, Python, Ruby, JavaScript, and/or Perl).
- Familiarity with scripting for Mac systems.

Even better if you have one or more of the following:
- Experience with a high-performance, high-availability environment.
- Strong analytical and debugging skills.
- Good communication and presentation skills.
- Relevant certifications.
- Experience with a UI framework.
- Experience with OWASP rules, and mitigating security vulnerabilities using security tools like Fortify, SonarQube, Black Duck, etc.
- Ability to understand Agile and DevOps tools and technologies.
- Strong problem-solving and debugging skills.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 8 hours ago
1.0 years
3 - 8 Lacs
Hyderābād
On-site
Req ID: 329983

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Engineer to join our team in Bangalore, Karnātaka (IN-KA), India (IN).

Once you are here, you will quickly be steeped in a suite of custom accelerators, which include:
- Continuously tested and delivered Infrastructure, Pipeline, and Policy as Code
- Extensible automation and dependency-management code frameworks
- Development of event-driven functions, APIs, backend services, command-line interfaces, and self-service developer portals

We eat our own dog food: all your work will be covered with unit, security, governance, and functional testing using appropriate frameworks. Armed with these accelerators, you will be among the first on the ground to customize and deploy the delivery platform that will enable the application developers who follow us to rapidly create, demonstrate, and deliver value sprint over sprint for much of the Global Fortune 500.

Basic Qualifications:
- 1+ years of scripting experience in Bash or PowerShell
- 3+ years of experience with GCP engineering, API Pub/Sub, BigQuery, Python, CI/CD, and DevOps processes
- 3+ years of experience in the design, development, configuration, and implementation of projects in GCP
- 4+ years of networking experience (security, DNS, VPN, cloud, load balancing)
- 4+ years of systems administration experience with at least one operating system (Linux or Windows)
- 1+ years of experience with one of the following public cloud platforms (AWS or Azure)
- 1+ years managing, maintaining, or working with SonarQube

Desired Experience & Skills:
- 1+ years of serverless or container-based architecture experience
- 1+ years of Infrastructure as Code (IaC) experience
- 3+ years of Azure DevOps management
- 3+ years managing, maintaining, or working with SonarQube, JFrog, Jenkins
- Can autonomously contribute to cloud and application orchestration code, and is actively involved in peer reviews
- Can deploy and manage the common tools we use (Jenkins, monitoring, logging, SCM, etc.) from code
- Advanced networking (tcpdump, network flow security analysis; can collect and understand metrics between microservices)
- Some familiarity with advanced authentication technologies (federated auth, SSO)
- A curious mindset, with the ability to identify and resolve issues from start to end

Please note the shift timing requirement: 1:30 pm IST - 10:30 pm IST

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world.
NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 8 hours ago
0 years
10 - 16 Lacs
Hyderābād
On-site
Qualifications: Experience in Python backend development, with a strong understanding of asynchronous programming. Expertise in FastAPI, SQLAlchemy, Pydantic, and scalable backend architecture. Hands-on experience with chatbot development and integrating LLMs (e.g., GPT, Claude, Gemini) into backend systems. Familiarity with LLM orchestration frameworks such as LangChain, OpenAI SDK, or LlamaIndex. Strong knowledge of RAG architecture and real-time document-query pipelines. Experience with API Gateway management and building secure, multi-tenant backend platforms. Proficiency in SQL/NoSQL databases such as PostgreSQL, MySQL, or CosmosDB. Experience integrating Azure AI Services, or equivalents from AWS or Google Cloud. Understanding of role-based access control and secure authentication using JWT/OAuth2. Familiarity with containerization (Docker), CI/CD workflows, and monitoring/logging tools. Immediate joiners preferred. Job Type: Full-time Pay: ₹1,000,000.00 - ₹1,600,000.00 per year Location Type: In-person Work Location: In person Speak with the employer +91 7356497435
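For candidates gauging what "a strong understanding of asynchronous programming" means in practice here, the following is a minimal, stdlib-only sketch (all function names and data are hypothetical, not part of any real system) of the concurrent fan-out pattern a RAG-style backend typically relies on:

```python
import asyncio

async def fetch_chunk(source: str) -> str:
    # Stand-in for a non-blocking I/O call (e.g. a vector-store or LLM request).
    await asyncio.sleep(0)  # yield control instead of blocking the event loop
    return f"chunk-from-{source}"

async def retrieve_context(sources: list[str]) -> list[str]:
    # Fan out the lookups concurrently rather than awaiting them one by one.
    return await asyncio.gather(*(fetch_chunk(s) for s in sources))

if __name__ == "__main__":
    print(asyncio.run(retrieve_context(["docs", "faq"])))
```

The same shape generalizes to real document-query pipelines; `asyncio.sleep(0)` merely stands in for awaiting genuine non-blocking I/O.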
Posted 8 hours ago
10.0 - 12.0 years
7 - 9 Lacs
Hyderābād
On-site
Job description Some careers have more impact than others. If you’re looking for a career where you can make a real impression, join HSBC and discover how valued you’ll be. HSBC is one of the largest banking and financial services organizations in the world, with operations in 62 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions. We are currently seeking an experienced professional to join our team in the role of Assistant Vice President. Principal responsibilities: The role holder will support Data Finance in maintaining and governing data which has been set up for the Data Quality/Remediation process within Finance, supporting teams across Finance. Primary responsibilities will include: Develop a strong working understanding of Data Quality processes, Issue Management, and broader Data Management pillars; apply this knowledge to real-world problem-solving, not just theoretical frameworks. Conduct hands-on business-data analysis on both MVP-level use cases and complex scenarios, depending on priority/deliverable timelines. Support (where required) Governance and Data leads in driving working-group outcomes, hosting data forums, and managing meeting logistics, including documentation and ownership of follow-up actions. Take ownership of stakeholder relationships, using strong interpersonal and communication skills to manage expectations, educate, resolve conflicts, and deliver results across cross-functional teams. Demonstrate strong self-management by proactively prioritizing tasks, driving deliverables independently and maintaining accountability. Experience in MI reporting work, with entry-level ability to code in SQL, Python and/or Alteryx. Communicating with stakeholders across functions in diverse locations and establishing working relationships. 
Absorbing concepts, defining the approach, and developing processes with limited handholding, under pressure to deliver within fixed timelines in an environment of ambiguity. Requirements Postgraduate/Graduate with 10-12 years of experience within Banking with some data experience. Strong analytical and problem-solving skills. Excellent stakeholder engagement and management skills. Ability to navigate within the organization. Experience in analyzing and interpreting large volumes of data and information. Flexibility to work in accordance with Business requirements – this may include working outside of normal hours. Must be experienced in working under pressure on multiple process improvement projects. Understanding of HSBC Group structures, values, processes, and objectives. Experience with using project/collaboration tools such as JIRA, SharePoint, and Confluence. Proficient in MS Excel and PowerPoint. You’ll achieve more at HSBC HSBC is an equal opportunity employer committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and, opportunities to grow within an inclusive and diverse environment. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, people with disabilities, color, national origin, veteran status, etc. We consider all applications based on merit and suitability to the role. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. ***Issued By HSBC Electronic Data Processing (India) Private LTD***
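As a hedged illustration of the entry-level SQL/Python coding this role mentions in the context of Data Quality monitoring (the table, columns, data, and rule below are invented purely for the example), a simple completeness check might look like:

```python
import sqlite3

# Illustrative in-memory table; a real check would run against governed Finance data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, currency TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, "USD"), (2, None), (3, "GBP"), (4, None)])

# Data-quality rule: measure the share of records missing a mandatory attribute.
total = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
missing = conn.execute(
    "SELECT COUNT(*) FROM accounts WHERE currency IS NULL").fetchone()[0]
completeness = 1 - missing / total
print(f"currency completeness: {completeness:.0%}")
```

Completeness is only one of several standard data-quality dimensions (alongside accuracy, consistency, and timeliness), but the query-then-score shape is the same for each.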
Posted 8 hours ago
0 years
0 Lacs
Hyderābād
On-site
Matillion is The Data Productivity Cloud. We are on a mission to power the data productivity of our customers and the world, by helping teams get data business ready, faster. Our technology allows customers to load, transform, sync and orchestrate their data. We are looking for passionate, high-integrity individuals to help us scale up our growing business. Together, we can make a dent in the universe bigger than ourselves. With offices in the UK, US and Spain, we are now thrilled to announce the opening of our new office in Hyderabad, India. This marks an exciting milestone in our global expansion, and we are now looking for talented professionals to join us as part of our founding team. Matillion is a fast-paced hyper-scale software development company. You will be based in India but working with colleagues globally, specifically across the US, the UK and in Hyderabad. The Enterprise data team is responsible for producing Matillion's reporting metrics and KPIs. We work closely with Finance colleagues, the product team and Go To Market to interpret the data that we have and provide actionable insight across the business. The purpose of this role is to: increase the value of strategic information from the data warehouse, Salesforce, Thoughtspot, and DPC Hub; develop models to help us understand customer behaviour, specifically onboarding, product usage and churn; and use our rich data assets to streamline operational processes. What will you be doing? Run structured experiments to evaluate and improve LLM performance across generative and task-oriented functions. 
Improving our AI evaluation frameworks Investigating ways generative AI can be used to improve data quality Building more traditional data science predictive models to forecast customer consumption and churn, and/or anomaly detection for failing data pipelines Keeping current on the latest research and proposing proof-of-concept projects to explore how it can assist us Educating other team members to raise the team’s understanding of theoretical concepts and the latest developments What are we looking for? Technical/Role Specific - Core Skills MSc, PhD, or equivalent experience in ML, NLP, or a related field. Strong understanding of LLM internals: transformer architecture, tokenization, embeddings, sampling strategies. Python fluency, especially for data science and experimentation (NumPy, Pandas, Matplotlib, Jupyter). Experience with LLM tools (e.g. Hugging Face, LangChain, OpenAI API). Familiarity with prompt engineering and structured evaluation of generative outputs. Technical/Role Specific - Preferred Skills Any experience of reinforcement learning techniques, even if on a small scale Experience of: model evaluation; fine-tuning, model distillation, instruction tuning or transfer learning; agentic systems (tool use / agentic frameworks); implementing guardrails; RAG architecture design and vector search Understanding of model failure modes, fallback strategies, and error recovery LLM performance optimization tradeoffs (latency, cost, accuracy) Uncertainty estimation and confidence scoring in generative systems Privacy and compliance considerations in AI for SaaS Personal Capabilities Enthusiasm to learn Able to coach and mentor those around you to increase their knowledge Comfort working across teams Ability to translate requirements between data scientists (research focus) and software engineers (product focus) Clear communication of challenges, timelines, and possible solutions to stakeholders Adaptability to rapid changes in a dynamic tech startup environment 
Enthusiasm for learning new AI/ML Ops tools, libraries, and techniques Proactive at diagnosing problems to understand a true root cause Willingness to experiment and to look for ways to optimise existing systems Willingness to pivot quickly in a rapidly evolving generative AI landscape Matillion has fostered a culture that is collaborative, fast-paced, ambitious, and transparent, and an environment where people genuinely care about their colleagues and communities. Our 6 core values guide how we work together and with our customers and partners. We operate a truly flexible and hybrid working culture that promotes work-life balance, and are proud to be able to offer the following benefits: Company Equity 27 days paid time off 12 days of Company Holiday 5 days paid volunteering leave Group Mediclaim (GMC) Enhanced parental leave policies MacBook Pro Access to various tools to aid your career development More about Matillion Thousands of enterprises including Cisco, DocuSign, Slack, and TUI trust Matillion technology to load, transform, sync, and orchestrate their data for a wide range of use cases from insights and operational analytics, to data science, machine learning, and AI. With over $300M raised from top Silicon Valley investors, we are on a mission to power the data productivity of our customers and the world. We are passionate about doing things in a smart, considerate way. We’re honoured to be named a great place to work for several years running by multiple industry research firms. We are dual headquartered in Manchester, UK and Denver, Colorado. We are keen to hear from prospective Matillioners, so even if you don’t feel you match all the criteria please apply and a member of our Talent Acquisition team will be in touch. Alternatively, if you are interested in Matillion but don't see a suitable role, please email talent@matillion.com. Matillion is an equal opportunity employer. 
We celebrate diversity and we are committed to creating an inclusive environment for all of our team. Matillion prohibits discrimination and harassment of any type. Matillion does not discriminate on the basis of race, colour, religion, age, sex, national origin, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by law.
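As a rough, self-contained sketch of the "structured evaluation of generative outputs" this role asks about (the metric, predictions, and references below are illustrative inventions, not Matillion's actual evaluation framework), a normalized exact-match rate can be computed like this:

```python
def normalize(text: str) -> str:
    # Case and whitespace normalisation before comparison.
    return " ".join(text.lower().split())

def exact_match_rate(predictions: list[str], references: list[str]) -> float:
    # Fraction of generations that match their reference after normalisation.
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["Paris", " paris ", "Lyon"]  # hypothetical model outputs
refs = ["Paris", "Paris", "Paris"]    # hypothetical gold answers
print(f"exact match: {exact_match_rate(preds, refs):.2f}")
```

Exact match is the crudest useful metric; real frameworks layer on fuzzier scores (token overlap, embedding similarity, LLM-as-judge), but the structure of run-experiments-then-score stays the same.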
Posted 8 hours ago
2.0 - 5.0 years
0 Lacs
Hyderābād
On-site
Summary Responsible for supporting all statistical programming/data review reporting and analytics development aspects of assigned studies or project-level activities. About the Role Major accountabilities: 1. Produce and track reports for various line functions within Global Drug Development, used for ongoing monitoring of clinical data 2. Provide understandable and actionable reports on clinical data and monitoring of clinical data for key stakeholders 3. Facilitate interaction with end-users on creating specifications and working with programmers or performing the programming activities for successful delivery 4. Provide quantitative analytical support to the global program teams, including providing support on analyzing reports 5. Support the planning, execution and close-out of Clinical Programs/Trials 6. Support the management in collation and delivery of analytics reports for critical decision making 7. Create, file and maintain appropriate documentation 8. Work with the internal SMEs and key stakeholders in providing analysis and interpretation of clinical program/trial operational data 9. Provide necessary training to end-users on best, appropriate and consistent use of various data review tools 10. Program reports of various complexity from documented requirements, within the clinical reporting systems, using SQL, PL/SQL, C#, VB script, SAS, Python, R 11. Good understanding of Novartis Clinical Data Standards and their implementation for creation of report specifications or report outputs Key performance indicators: 1. Quality and timeliness of deliverables 2. Revisions to deliverables caused by logic or programming errors 3. Customer feedback and satisfaction Minimum Requirements: Work Experience: 2-5 years of experience in clinical review and reporting programming, business analytics and/or clinical trial setup, gained in the pharmaceutical industry, CRO or Life Science related industry, as well as the following: 2. 
Strong knowledge of programming languages (SQL, PL/SQL, C#, VB script, SAS, Python, R) 3. Knowledge of Data Review and/or Business Intelligence tools (such as Spotfire, JReview) 4. Understanding of clinical data management systems and/or relational databases as applied to clinical trials 5. Attention to detail, quality, time management and customer focus 6. Ability to translate technical concepts for non-technical users in the areas of clinical database design and data review reporting development 7. Strong verbal and written communication skills to work with our global partners and customers 8. Understanding of the Drug Development Process, ICH-GCP, CDISC standards and Health Authority guidelines and regulations Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards Division Development Business Unit Innovative Medicines Location India Site Hyderabad (Office) Company / Legal Entity IN10 (FCRS = IN010) Novartis Healthcare Private Limited Functional Area Research & Development Job Type Full time Employment Type Regular Shift Work No Accessibility and accommodation Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. 
If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to [email protected] and let us know the nature of your request and your contact information. Please include the job requisition number in your message. Novartis is committed to building an outstanding, inclusive work environment and diverse teams' representative of the patients and communities we serve.
Posted 8 hours ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderābād
On-site
Requirements: Experience: 5 to 10 years Strong knowledge of reporting packages (Business Objects) Advanced Excel with hands-on experience in VBA/macros. Proficiency in Power BI, Power Automate, and Power Apps. Strong SQL scripting and experience in working with relational databases. Experience in data modeling, cleansing, and performance tuning for large datasets. Python for data analysis and automation (e.g., pandas, matplotlib, openpyxl) Exposure to Microsoft Azure (Data Factory, Synapse, or Logic Apps) is highly desirable, but not mandatory. Qualification: Bachelor’s degree in Computer Science, Engineering, or a related field Job Category: Software Engineer Job Location: Hyderabad Job Country: India
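To illustrate the "Python for data analysis" requirement without assuming pandas is installed, here is a stdlib-only sketch (the CSV extract, regions, and amounts are hypothetical) of the kind of per-group aggregate a Power BI measure or SQL `GROUP BY` would surface:

```python
import csv
import io
import statistics
from collections import defaultdict

# Hypothetical extract standing in for a relational-database query result.
raw = io.StringIO("region,amount\nNorth,100\nNorth,300\nSouth,200\n")

grouped = defaultdict(list)
for row in csv.DictReader(raw):
    grouped[row["region"]].append(float(row["amount"]))

# Per-region average: the aggregate a dashboard tile would typically display.
averages = {region: statistics.mean(vals) for region, vals in grouped.items()}
print(averages)
```

With pandas available, the same result is a one-liner (`df.groupby("region")["amount"].mean()`), which is why the posting lists pandas alongside raw SQL skills.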
Posted 8 hours ago
0 years
3 - 16 Lacs
India
On-site
We are looking for an experienced and creative AI Expert to drive innovation in academics and marketing using AI tools. You will work with both the digital marketing and academic teams to build intelligent tools, automate communication, and create AI-generated content. Responsibilities Build AI tools for academic dashboards, student analytics, and content automation Use tools like ChatGPT, Midjourney, Canva, and Runway ML to create educational visuals and videos Automate lead generation and marketing campaigns Support faculty with prompt engineering and AI-based learning tools Required Skills Python, Machine Learning (basic to intermediate) Prompt Engineering, OpenAI APIs, Canva, Zapier Experience with image/video generation tools (Midjourney, DALL·E, Runway ML) Phone: +91-9959271353 Website: www.rankridge.com Job Types: Full-time, Permanent Pay: ₹393,916.97 - ₹1,602,233.17 per year Work Location: In person
Posted 8 hours ago
0.0 - 3.0 years
6 - 7 Lacs
Hyderābād
On-site
Company Overview Arcesium is a global financial technology firm that solves complex data-driven challenges faced by some of the world’s most sophisticated financial institutions. We constantly innovate our platform and capabilities to meet tomorrow’s challenges, anticipate the risks our clients encounter, and design advanced solutions to help our clients achieve transformational business outcomes. Financial technology is a high-growth industry as change and innovation continue to disrupt the status-quo and prompt major transformation. Arcesium is at a particularly interesting time in our own growth as we look to leverage our successfully established market position and expand operations in pursuit of strategic new business opportunities. We value intellectual curiosity, proactive ownership, and collaboration with colleagues, and we empower you to meaningfully contribute from day one and accelerate your professional development. We are looking for a candidate for our Counterparty Connectivity Group which is the first point of contact in Arcesium for counterparties, services, third-party vendors and clients in all matters relating to establishing connectivity and enable systematic data transmission from client counterparts to our platform. The group plays an important role in facilitating availability of inbound and outbound data for various downstream teams (Middle Office, Trade Accounting & Operations, Treasury etc.) to perform critical operational workflows like Reconciliations, Collateral Management, Pricing. The group has an opening for a vital role where we establish intelligent reconciliation logic between the external data and client data, provide operational support, and product support to our internal and external clients What You'll Do Liaise with multiple stakeholders like clients, custodians, prime brokers, admins to enable systematic data workflow to Arcesium’s platform. 
Analyze and interpret external/internal data and establish logic for daily cash, position, trades, collateral, margin etc. statements; consumed by Arcesium’s downstream teams/systems and assist in resolving any variances. Become familiar with the reconciliation logic for simple and complex asset classes and assist clients (internal & external) with any related queries. Point of Contact for assigned clients to handle various queries and provide a timely resolution as per defined SLAs. Manage the client mandate from counterparty connectivity perspective and ensure all queries get addressed in an expected and timely manner. Work 75% supporting existing operations and 25% on new build required for existing client operations. Perform regular review of client’s workflow to look for business process improvements. Understand the upstream and downstream workflow and liaise with various development teams to meet the business requirements related to any new reconciliation setup or new implementation. Achieve good understanding of the internal control environment and ensure compliance with internal policies and procedures. Communicate regularly and effectively and must have ability to build effective relationships with key internal and external stakeholders. What You'll Need Bachelor’s/Master’s degree in financial discipline with 0-3 years of relevant experience Proficient in Industry and domain knowledge Technical skillsets including knowledge of python and other languages would be preferable. 
An outstanding academic background with a passion to work in fast paced and dynamic environment Exceptional verbal and written communication skills Ability to multitask and manage multiple deliverables A high level of personal maturity and a collaborative attitude Exceptional interpersonal skills Should be able to demonstrate delivering high-quality work under stringent deadlines Work experience between 0 to 3 years if applicable, in financial services - supporting Middle and back-office trade life cycle and/or interacting with Hedge Fund managers and other asset managers will be an advantage. Arcesium and its affiliates do not discriminate in employment matters on the basis of race, color, religion, gender, gender identity, pregnancy, national origin, age, military service eligibility, veteran status, sexual orientation, marital status, disability, or any other category protected by law. Note that for us, this is more than just a legal boilerplate. We are genuinely committed to these principles, which form an important part of our corporate culture, and are eager to hear from extraordinarily well qualified individuals having a wide range of backgrounds and personal characteristics.
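As a purely illustrative sketch of the reconciliation logic described above (the tickers, quantities, and zero tolerance are invented for the example; real reconciliations span cash, trades, collateral, and margin statements), comparing internal positions against a counterparty statement might look like:

```python
def reconcile(internal: dict, counterparty: dict, tolerance: float = 0.0) -> dict:
    """Return position breaks between internal books and a counterparty statement.

    A break is any instrument where the two sides differ by more than `tolerance`;
    the value recorded is internal minus counterparty.
    """
    breaks = {}
    for key in internal.keys() | counterparty.keys():
        ours = internal.get(key, 0.0)
        theirs = counterparty.get(key, 0.0)
        if abs(ours - theirs) > tolerance:
            breaks[key] = ours - theirs
    return breaks

ours = {"AAPL": 100.0, "MSFT": 50.0}                     # hypothetical client book
broker = {"AAPL": 100.0, "MSFT": 45.0, "TSLA": 10.0}     # hypothetical PB statement
print(reconcile(ours, broker))
```

In practice the interesting work is upstream of this comparison: normalising identifiers, dates, and sign conventions across counterparty feeds so the two sides are comparable at all.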
Posted 8 hours ago
0 years
5 - 5 Lacs
Hyderābād
On-site
On-site · PWC Technologies · Full time · Hyderabad, Telangana, India Description Skill required: Candidate must possess at least a Bachelor of Science/Bachelor of Computer Application/Bachelor of Engineering/Technology. As a plus, a certification in the QC field, such as the ISTQB certification. As a plus, Master of Computer Application/Computer Science, Master of Science or Master of Engineering/Technology in Computer Science/Information Technology, Engineering (Computer/Telecommunication) or equivalent. Knowledge of the Software Development Life Cycle (SDLC), especially the QC and testing phase Ability to use the tools and techniques that are selected by the QC lead/manager for the specific software project Reporting capabilities using tools like Microsoft Excel to communicate the status of testing to peers and upper management Eligibility: Candidate should preferably hold an engineering background or equivalent graduation. Responsibilities: · Reviewing requirements, specifications and technical design documents to provide timely and meaningful feedback · Creating detailed, comprehensive and well-structured test cases and test scenarios · Estimate, prioritize, plan and coordinate testing activities · Design, develop and execute automation scripts using open-source tools · Data-driven test script preparation · Regression testing, support and reviewing test scripts · Perform thorough regression testing when bugs are resolved · Develop and apply testing processes for new and existing products to meet client needs · Identify, record, document thoroughly and track bugs · Preparing script execution reports · Liaise with internal teams (e.g. 
developers and product managers) to identify system requirements · Monitor debugging process results · Investigate the causes of non-conforming software and train users to implement solutions · Track quality metrics, like defect densities and open defect counts · Stay up-to-date with new testing tools and test strategies Requirements: · Strong knowledge of software testing methodologies and processes · Good knowledge of E2E framework tools like Protractor or BDD framework tools like Cucumber, etc. · Candidate should possess strong knowledge of and hands-on experience with the Selenium suite of tools (Selenium IDE, Selenium RC, Selenium WebDriver and Selenium Grid) · Robust knowledge of element locators and WebDriver methods · Expertise in implementing test automation frameworks using Selenium · Should be capable of creating and executing scripts in Selenium IDE and Selenium WebDriver · Good knowledge of exception handling, file handling and parameterization · Strong knowledge of Selenium WebDriver, JUnit, TestNG, and Java programming (variables, data types, operators, flow control, etc.) 
· Should possess solid knowledge of OOP concepts · Proficient in designing test artifacts like test cases, test scenarios and RTM · Experienced in a defined testing process/methodology · Experience with performance and/or security testing is a plus · Proven work experience in software testing using an automation tool · Experience in writing clear, concise and comprehensive test cases and test scenarios · Working knowledge of SQL and scripting · Candidate should be able to quickly grasp the domain and start delivering results · Should have the attitude to take up any task, even if it is challenging, and deliver it · Should be very flexible in timings and expected to work on weekends and even late hours if required during critical deliverables · Proven team player with good analytical thinking in problem solving and delivering solutions Design and document comprehensive test plans Expert in functional testing methodologies, performance testing and security testing Good working knowledge of Python, Selenium and Cypress Good understanding of Agile practices Requirements: 5+ years of experience in testing (automation and manual testing), Selenium.
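As a hedged, framework-free illustration of the data-driven test preparation this posting emphasises (the function under test and its cases are hypothetical; a real suite would drive a browser via Selenium WebDriver and parameterize through TestNG or pytest), the core idea is separating test data from test logic:

```python
# Hypothetical function under test.
def apply_discount(price: float, pct: float) -> float:
    return round(price * (1 - pct / 100), 2)

# Data-driven cases: each row is ((inputs), expected), mirroring how a
# parameterised script reads rows from an external data source.
CASES = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((200.0, 50), 100.0),
]

def run_cases() -> list:
    # Execute every row against the same test logic and collect failures.
    failures = []
    for (price, pct), expected in CASES:
        actual = apply_discount(price, pct)
        if actual != expected:
            failures.append((price, pct, expected, actual))
    return failures

print("failures:", run_cases())
```

Adding a scenario then means adding a data row, not writing a new script; in Selenium terms the rows would typically come from Excel or CSV and the "function under test" would be a page interaction.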
Posted 8 hours ago
4.0 years
0 Lacs
Hyderābād
On-site
Apple is a place where extraordinary people gather to do their best work. Together we craft products and experiences people once couldn’t have imagined - and now can’t imagine living without. If you’re motivated by the idea of making a real impact, and joining a team where we pride ourselves in being one of the most diverse and inclusive companies in the world, a career with Apple might be your dream job. Description We are looking for a Software Engineer to join Apple's Information Systems & Technology (IS&T) team. You will be hands-on throughout the software development lifecycle as we define, design, and build new features to support a rapidly growing business area at Apple. You will be joining a team that manages and operates back-end infrastructure for a portfolio of enterprise systems. We are looking for a talented back-end engineer to join our team in India, where they will contribute to building innovative, scalable solutions that advance our platform roadmap and help shape the future of our enterprise services ecosystem. 
As a Software Engineer you will be involved in the following tasks: - Design and develop software components to serve a growing business area - Make technical design decisions that support the long-term health of applications - Implement and standardise strong engineering practices (unit testing, CI) - Provide mentorship and debate practices with other specialists, lead code reviews and meetings - Communicate directly with stakeholders to gain a deep understanding of their problems - Assist support teams through creative problem solving and sharing your expertise - Interact with and present to stakeholders at different levels and locations - Work with team members locally and distributed across locations and timezones RESPONSIBILITIES This role will likely be structured as follows: - 70% Engineering - 15% Operations - 15% Process improvement Minimum Qualifications Bachelor's degree in Computer Science, Engineering, or related field, or equivalent practical experience Minimum of 4 years of proven track record of writing high-quality code in at least one programming language, with proficiency valued in any of Python, Java, Go, C++, and the ability and desire to quickly learn new languages Strong understanding of key infrastructure principles (networking, TLS, authentication) that underpin modern backend and frontend application development Strong foundation in computer science fundamentals: object-oriented programming, data structures, algorithms, operating systems, and distributed systems concepts Solid understanding of software design principles and patterns for building maintainable systems Understanding of CI/CD pipelines and DevOps practices Experience with monitoring, logging, and observability tools Awareness of security best practices and secure coding principles Experience with Kubernetes Submit CV
Posted 8 hours ago
3.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Data Analyst Location: Gurugram Experience: 3-4 years Employment Type: Full-Time About the Role: We are looking for a skilled and detail-oriented Data Analyst to join our team. The ideal candidate will be responsible for collecting, analyzing, and interpreting large datasets to support data-driven decision-making across the organization. Proficiency in MongoDB and SQL is essential for this role. Key Responsibilities: Collect, process, and clean structured and unstructured data from various sources. Analyze data using SQL queries and MongoDB aggregations to extract insights. Develop and maintain dashboards, reports, and visualizations to present data in a meaningful way. Collaborate with cross-functional teams to identify business needs and provide data-driven solutions. Monitor data quality and integrity, ensuring accuracy and consistency. Support the development of predictive models and data pipelines. Required Skills & Qualifications: Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. Proven experience as a Data Analyst or in a similar role. Strong proficiency in SQL for data querying and manipulation. Hands-on experience with MongoDB, including working with collections, documents, and aggregations. Knowledge of data visualization tools such as Tableau, Power BI, or similar (optional but preferred). Strong analytical and problem-solving skills. Excellent communication and stakeholder management abilities. Good to Have: Experience with Python/R for data analysis. Exposure to ETL tools and data warehousing concepts. Understanding of statistical methods and A/B testing. To Apply: Send your updated resume to Anmol.gupta@jobizo.com with the subject line “Application for Data Analyst”.
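To give a self-contained feel for the "MongoDB aggregations" skill this posting asks for (the collection, fields, and pipeline in the comment are hypothetical, and plain Python stands in for the database driver), a `$group`/`$sum` stage amounts to the following:

```python
from collections import defaultdict

# Documents as they might come back from a hypothetical MongoDB collection.
orders = [
    {"customer": "A", "total": 120},
    {"customer": "B", "total": 80},
    {"customer": "A", "total": 50},
]

# Conceptually equivalent to the aggregation stage:
#   {"$group": {"_id": "$customer", "spend": {"$sum": "$total"}}}
spend = defaultdict(int)
for doc in orders:
    spend[doc["customer"]] += doc["total"]
print(dict(spend))
```

The difference in practice is where the work happens: an aggregation pipeline pushes the grouping to the database server, which matters once collections are too large to pull into application memory.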
Posted 8 hours ago
0 years
8 - 9 Lacs
Hyderābād
On-site
About the Role
The Community Operations organization at Uber is responsible for delivering world-class customer support to riders, drivers, eaters, and couriers. Members of the Data & Analytics team in Uber's Community Operations organization deliver analytical solutions, such as reports or dashboards, and actionable insights that improve operational efficiency, enable scalable customer support, and help our operational teams drive a world-class customer experience.
What the Candidate Will Do
- Conceptualize and build intermediate reports & dashboards
- Identify trends in data sets & operational weaknesses
- Support business deep dives on data and insights generation
- Help measure the impact of new processes
- Build statistical tools & models to aid decision-making
- Automate reports & dashboards
- Support questions from requesters
- Mentor and train team members
Basic Qualifications
- At least 6 months of experience, or equivalent formal education, in statistics, engineering, math, or business
- Intermediate data analytics, SQL, and web query skills
- Basic programming skills
- Strong stakeholder management skills
Preferred Qualifications
- Experience with data visualization tools (e.g., Tableau or Power BI)
- Experience with Google Scripting
- Experience with Python or R
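Trend identification of the kind described above is often a simple rolling computation over an operational metric; the sketch below is a minimal pure-Python moving average, with invented ticket-volume numbers used purely for illustration.

```python
from collections import deque

def moving_average(values, window):
    """Return the trailing moving average over up to `window` points."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` values
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

# Invented daily support-ticket volumes; a rising smoothed tail
# flags an operational trend worth a deep dive.
tickets = [100, 98, 102, 120, 135, 150]
print(moving_average(tickets, 3))
```

In practice this kind of smoothing would typically be done in SQL or a dashboarding tool, but the logic is the same.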
Posted 8 hours ago
6.0 years
2 - 8 Lacs
Gurgaon
On-site
About Us
KlearNow.AI digitizes and contextualizes unstructured trade documents to create shipment visibility, business intelligence, and advanced analytics for supply chain stakeholders. It provides unparalleled transparency and insights, empowering businesses to operate efficiently. We futurize supply chains with AI/ML-powered collaborative digital platforms created from ingesting required trade documentation without the pain of complex integrations. We achieve our goals by assembling a team of the best talents. As we expand, it's crucial to maintain and strengthen our culture, which places a high value on our people and teams. Our collective growth and triumphs are intrinsically linked to the success and well-being of every team member.
OUR VISION
To futurize global trade, empowering people and optimizing processes with AI-powered clarity.
YOUR MISSION
As part of a diverse, high-energy workplace, you will challenge the status quo of supply chain operations with your knack for engaging clients and sharing great stories. KlearNow is operational and a certified Customs Business provider in the US, Canada, UK, Spain, and the Netherlands, with plans to grow into many more markets in the near future.
- Design, develop, and operate resilient distributed services that run on ECS or Kubernetes to serve hundreds of millions of users around the world
- Collaborate with various functional teams on expansion of our recommendation systems
- Influence the roadmap and product development of the KlearNow App and services
- Recruit, inspire, and develop team members
Qualifications
- B.Tech/MS in Computer Science and Technology with a minimum of 6 years of hands-on experience in designing, developing, testing, and deploying large-scale applications and microservices in Java, Golang, Python, Node.js, etc.
- Experience in designing and developing databases using MongoDB and SQL. NoSQL knowledge is a plus.
- Deep knowledge in one or more of these areas: cloud platforms (we use AWS), Kubernetes.
- Experience with distributed caches such as Elasticsearch, Redis, etc.
- Experience with Agile methodologies and containerization using CI/CD automation tools.
- An engineer who enjoys writing readable, concise, reusable, extensible code on a day-to-day basis.
- Ability to meet deadlines and get the job done under close and tight demands from management.
- A go-getter approach to finding solutions.
What your role demands:
- Take responsibility for the design, implementation, delivery, and maintenance of several code libraries and distributed services.
- Be responsible for building highly scalable, reliable, and fault-tolerant systems.
- Be an active participant in formulating the product roadmap and defining the OKRs of the team and the company.
- Discuss and articulate product requirements continuously with product management.
- Stay updated on the latest technologies in distributed systems; be capable of understanding and researching new technologies and tools that help build next-generation systems.
Preferred Qualifications
- Experience in design and development using NoSQL databases such as DynamoDB, Cassandra, or RocksDB
- Experience with Aerospike, ElasticSearch, Kafka, and Spark
- Experience with Agile development methodology and CI/CD
- Experience with Docker containers along with Kubernetes or ECS
Join our vibrant and forward-thinking team at KlearNow.ai as we continue to push the boundaries of AI/ML technology. We offer a competitive salary, flexible work arrangements, and ample opportunities for professional growth. We are committed to diversity, equality, and inclusion. If you are passionate about shaping the future of logistics and supply chain and making a difference, we invite you to apply.
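The distributed caches mentioned above (e.g., Redis) are usually used in a cache-aside pattern; the sketch below illustrates that pattern with a tiny in-memory TTL cache as a local stand-in, not a real Redis client, and the user-loading function is invented.

```python
import time

class TTLCache:
    """Minimal in-memory stand-in for a cache such as Redis."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        # Store the value alongside its expiry time.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

def get_user(cache, user_id, load_from_db):
    """Cache-aside read: try the cache first, fall back to the database."""
    cached = cache.get(user_id)
    if cached is not None:
        return cached
    value = load_from_db(user_id)
    cache.set(user_id, value, ttl_seconds=60)
    return value
```

With Redis the `set`/`get` calls would become `SET key value EX 60` and `GET key`, but the control flow around them stays the same.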
Posted 8 hours ago
15.0 years
8 - 10 Lacs
Gurgaon
On-site
ROLES & RESPONSIBILITIES
We are seeking an experienced and visionary Data Architect with over 15 years of experience to lead the design and implementation of scalable, secure, and high-performing data architectures. The ideal candidate should have a deep understanding of cloud-native architectures, enterprise data platforms, and end-to-end data lifecycle management. You will work closely with business, engineering, and product teams to craft robust data solutions that drive business intelligence, analytics, and AI initiatives.
Key Responsibilities:
- Design and implement enterprise-grade data architectures using cloud platforms (e.g., AWS, Azure, GCP).
- Lead the definition of data architecture standards, guidelines, and best practices.
- Architect scalable data solutions including data lakes, data warehouses, and real-time streaming platforms.
- Collaborate with data engineers, analysts, and data scientists to understand data requirements and deliver optimal solutions.
- Oversee data modeling activities including conceptual, logical, and physical data models.
- Ensure data security, privacy, and compliance with applicable regulations (e.g., GDPR, HIPAA).
- Define and implement data governance strategies in collaboration with stakeholders.
- Evaluate and recommend data-related tools and technologies.
- Provide architectural guidance and mentorship to data engineering teams.
- Participate in client discussions, pre-sales, and proposal building (if in a consulting environment).
Required Skills & Qualifications:
- 15+ years of experience in data architecture, data engineering, or database development.
- Strong experience architecting data solutions on at least one major cloud platform (AWS, Azure, or GCP).
- Deep understanding of data management principles, data modeling, ETL/ELT pipelines, and data warehousing.
- Hands-on experience with modern data platforms and tools (e.g., Snowflake, Databricks, BigQuery, Redshift, Synapse, Apache Spark).
- Proficiency with programming languages such as Python, SQL, or Java.
- Familiarity with real-time data processing frameworks like Kafka, Kinesis, or Azure Event Hubs.
- Experience implementing data governance, data cataloging, and data quality frameworks.
- Knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus.
- Excellent problem-solving, communication, and stakeholder management skills.
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Cloud Architect or Data Architect certification (AWS/Azure/GCP) is a strong plus.
Preferred Certifications:
- AWS Certified Solutions Architect – Professional
- Microsoft Certified: Azure Solutions Architect Expert
- Google Cloud Professional Data Engineer
- TOGAF or equivalent architecture frameworks
What We Offer:
- A collaborative and inclusive work environment
- Opportunity to work on cutting-edge data and AI projects
- Flexible work options
- Competitive compensation and benefits package
EXPERIENCE
16-18 Years
SKILLS
Primary Skill: Data Architecture
Sub Skill(s): Data Architecture
Additional Skill(s): Data Architecture
ABOUT THE COMPANY
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
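Data-quality frameworks of the kind referenced above often reduce to simple rule functions applied over records; below is a hedged, minimal sketch in Python, where the rule names, record fields, and sample rows are all invented for illustration.

```python
def check_not_null(records, field):
    """Return indices of records where `field` is missing or None."""
    return [i for i, r in enumerate(records) if r.get(field) is None]

def check_unique(records, field):
    """Return values of `field` that appear more than once."""
    seen, dupes = set(), set()
    for r in records:
        v = r.get(field)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

# Invented sample rows with two deliberate defects:
# a missing email and a duplicated id.
sample = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},
    {"id": 2, "email": "b@x.com"},
]
print(check_not_null(sample, "email"))  # [1]
print(check_unique(sample, "id"))       # [2]
```

Production data-quality tooling wraps exactly this kind of rule in scheduling, reporting, and alerting, but the per-rule logic is usually this simple.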
Posted 8 hours ago
0 years
4 - 6 Lacs
Gurgaon
On-site
Job Description:
We are looking for a highly skilled Engineer with solid experience building Big Data and GCP cloud-based real-time data pipelines and REST APIs with Java frameworks. The Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders. This role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team.
Technical Skills
1. Core Data Engineering Skills
- Proficiency with GCP's big data tools:
  - BigQuery: for data warehousing and SQL analytics.
  - Dataproc: for running Spark and Hadoop clusters.
  - Dataflow: for stream and batch data processing (high-level understanding).
  - Pub/Sub: for real-time messaging and event ingestion (high-level understanding).
- Expertise in building automated, scalable, and reliable pipelines using custom Python/Scala solutions or Cloud Functions.
2. Programming and Scripting
- Strong coding skills in SQL and Java.
- Familiarity with APIs and SDKs for GCP services to build custom data solutions.
3. Cloud Infrastructure
- Understanding of GCP services such as Cloud Storage, Compute Engine, and Cloud Functions.
- Familiarity with Kubernetes (GKE) and containerization for deploying data pipelines (optional but good to have).
4. DevOps and CI/CD
- Experience setting up CI/CD pipelines using Cloud Build, GitHub Actions, or other tools.
- Monitoring and logging tools like Cloud Monitoring and Cloud Logging for production workflows.
5. Backend Development (Spring Boot & Java)
- Design and develop RESTful APIs and microservices using Spring Boot.
- Implement business logic, security, authentication (JWT/OAuth), and database operations.
- Work with relational and NoSQL databases (MySQL, PostgreSQL, MongoDB, Cloud SQL).
- Optimize backend performance, scalability, and maintainability.
- Implement unit testing and integration testing.
Skills: Big Data, ETL, Data Warehousing, GCP, Java, REST API, CI/CD, Kubernetes
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
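Real-time pipelines of the kind this posting describes (events arriving via a bus like Pub/Sub, loaded into a warehouse like BigQuery) typically micro-batch events before loading; the sketch below shows only that batching step in plain Python, with a stubbed sink in place of any GCP client.

```python
def micro_batch(events, batch_size, sink):
    """Group a stream of events into fixed-size batches and flush each to `sink`."""
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) >= batch_size:
            sink(batch)   # in a real pipeline: a bulk load into the warehouse
            batch = []
    if batch:
        sink(batch)       # flush the final partial batch

# Stub sink that just records the batches it receives.
loaded = []
micro_batch(range(7), batch_size=3, sink=loaded.append)
print(loaded)  # [[0, 1, 2], [3, 4, 5], [6]]
```

A production version would add time-based flushing and retry on failed loads; frameworks like Dataflow handle that windowing and delivery logic for you.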
Posted 8 hours ago
4.0 years
4 - 8 Lacs
Gurgaon
On-site
Sr. Manager – Credit Modeling
Looking for a Credit Risk Modeler with 4+ years of experience developing credit risk scorecards (application, behavior, collections, etc.), with strong expertise in machine learning algorithms and SQL, and fluency in PySpark/Python. Proven ability to work proactively and independently to drive results.
Responsibilities:
- Develop end-to-end credit risk scorecards, from application through collections.
- Process credit bureau data as relevant for the specific scorecard at hand.
- Prepare modeling data using both internal databases and external data.
- Monitor models in production; report model performance and validity statistics.
- Interact with the model governance team on model builds and model monitoring.
- Recalibrate specific models as needed.
- Develop machine learning models (Random Forest, XGBoost, etc.) using PySpark/Python.
- Deploy machine learning models in production.
Skills/Experience:
- At least 4+ years of core risk modeling experience.
- Excellent knowledge of PySpark, Python, SQL, etc.
- Excellent SQL query skills.
- A good understanding of credit bureau data is required.
- Ability to work effectively within a team and flexibility to work on multiple projects.
- Self-starter, with demonstrable ability to work proactively and independently to drive results.
- Strong analytical and problem-solving skills.
Academic Qualifications:
B.Tech or Master's degree in economics, statistics, operations research, mathematics, engineering, business, or a related field with a strong quantitative emphasis.
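A credit scorecard of the kind this role builds typically maps a model's predicted default probability to points via the conventional points-to-double-odds (PDO) scaling; below is a minimal sketch of that scaling, where the base score, base odds, and PDO values are illustrative defaults, not figures from the posting.

```python
import math

def scorecard_points(prob_bad, base_score=600, base_odds=50, pdo=20):
    """Convert a predicted probability of default into a scorecard score.

    Conventional PDO scaling: `base_score` corresponds to `base_odds`
    (good:bad), and every `pdo` points doubles the odds. The parameter
    values here are illustrative only.
    """
    odds = (1 - prob_bad) / prob_bad          # good:bad odds
    factor = pdo / math.log(2)
    offset = base_score - factor * math.log(base_odds)
    return offset + factor * math.log(odds)

# A probability of 1/51 gives 50:1 odds, i.e. exactly the base score.
print(round(scorecard_points(1 / 51)))  # 600
```

Lower predicted risk yields higher scores, so the function is a monotone transform of the model output, which is what makes scorecards easy to explain to model governance.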
Posted 8 hours ago
0 years
4 - 9 Lacs
Gurgaon
On-site
We are seeking a highly skilled Platform Engineer with a strong focus on security to design, implement, and manage secure, scalable, and resilient cloud infrastructure. The ideal candidate should have deep expertise in AWS, Infrastructure as Code (IaC) tools like Terraform and Ansible, and strong working knowledge of Kubernetes. A solid understanding of cloud security platforms such as AWS Security Hub, AWS GuardRails, Wiz, Chainguard, and Terraform Sentinel for policy-as-code is essential. This role combines platform engineering with security best practices to ensure cloud infrastructure remains robust and compliant.
Roles and Responsibilities
- Platform Engineering & Automation: Design, implement, and manage scalable and secure infrastructure platforms using Terraform, Ansible, and scripting in Python and Bash. Automate provisioning, monitoring, and scaling operations across cloud environments.
- Cloud & Kubernetes Operations: Build and manage containerized workloads on Amazon EKS or other Kubernetes platforms. Ensure reliable deployment pipelines and automated rollouts/rollbacks, while maintaining secure container configurations.
- Security Tooling Integration: Integrate cloud security platforms like Wiz and Chainguard into CI/CD pipelines and the Kubernetes ecosystem to detect, prevent, and remediate security risks across infrastructure and workloads.
- Policy-as-Code & Compliance: Implement Terraform Sentinel policies to enforce security and compliance standards as part of the provisioning workflow. Develop automated controls for access, resource usage, and compliance checks.
- Infrastructure & Cloud Security: Champion security best practices across the platform. Implement network security (VPC, subnets, NACLs, security groups), IAM policies, secrets management, image scanning, and runtime protection.
- Monitoring & Observability: Set up and maintain observability tools and dashboards.
Ensure systems have high availability and resilience and meet SLA/SLO requirements, while proactively identifying and resolving anomalies.
- Collaboration & Enablement: Partner with developers, security teams, and SREs to improve platform usability, enhance developer productivity, and promote secure-by-design architecture principles.
Qualifications
- Strong experience in building and managing AWS-based infrastructure with Terraform and Ansible.
- Deep hands-on experience with Kubernetes (preferably Amazon EKS).
- Working knowledge of Wiz, Chainguard, and Terraform Sentinel.
- Proficiency in Python and Bash for scripting and automation.
- Strong understanding of cloud security principles, secure networking, and IAM.
- Experience securing containerized workloads, including image hardening, runtime security, and vulnerability scanning.
- Proven ability to design resilient, secure, and scalable infrastructure architectures.
- Bachelor's degree in Computer Science, Cybersecurity, or a related field.
- Relevant certifications (e.g., AWS Certified Security – Specialty, CKA/CKS, HashiCorp Certified Terraform Associate).
- Familiarity with DevSecOps practices, shift-left security, and secure SDLC.
- Experience working in Agile and modern CI/CD development environments.
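Policy-as-code of the kind Terraform Sentinel provides can be approximated in plain Python for illustration: each policy is a predicate over a proposed resource configuration. The rule names, resource fields, and sample resource below are invented for the sketch and are not Sentinel syntax.

```python
def deny_public_buckets(resource):
    """Policy: object-storage buckets must not allow public read access."""
    if resource.get("type") != "storage_bucket":
        return None  # policy does not apply to this resource type
    return "public access forbidden" if resource.get("public_read") else None

def require_tags(resource, required=("owner", "env")):
    """Policy: every resource must carry the required tags."""
    missing = [t for t in required if t not in resource.get("tags", {})]
    return f"missing tags: {missing}" if missing else None

def evaluate(resource, policies):
    """Collect all violation messages for a proposed resource."""
    return [v for policy in policies for v in [policy(resource)] if v]

# Invented resource config with two deliberate violations.
bucket = {"type": "storage_bucket", "public_read": True, "tags": {"owner": "data"}}
print(evaluate(bucket, [deny_public_buckets, require_tags]))
```

In a real Sentinel setup these checks run during `terraform plan`/`apply`, and a non-empty violation list blocks the provisioning workflow.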
Posted 8 hours ago