0 years
2 - 3 Lacs
Gurgaon
On-site
Role Purpose
The Sr Analyst IT Business is responsible for leading business process reviews and providing recommendations to ensure processes, systems, and documentation meet changing business conditions and/or requirements. The Sr Analyst IT Business will gather business requirements and perform system and process analysis, design, testing support, and prototyping for large, complex cross-functional projects. The Senior IT Analyst will lead the design and implementation of automation solutions across business functions, leveraging RPA tools to enhance efficiency and drive stakeholder engagement.
Key Accountabilities
Collaborate with technology partners to design and deliver automation solutions that leverage RPA, Python, and AI/ML components.
Analyze business processes, identify automation opportunities, and create solution architectures that include cognitive and predictive elements.
Lead technical design and code reviews, and ensure adherence to automation best practices and enterprise architecture standards.
Enhance and optimize existing automations, integrating machine learning models or AI APIs where applicable.
Work with AI tools such as OCR engines, language models, and classification/prediction models to build intelligent workflows.
Conduct monthly stakeholder meetings to track business demand and pipeline and discuss potential automation initiatives.
Track all work in Rally or an equivalent project tracking tool, with regular reporting and documentation.
Guide and manage the automation support team, ensuring timely resolution of incidents, root cause analysis, and continuous improvement.
Key Skills & Experiences
Education – Bachelor's degree in a relevant field of work, or an equivalent combination of education and work-related experience. SAFe PO/PM certification preferred. SAFe Agilist certification preferred.
Experience – Strong proficiency in automation tools and capabilities, preferably BP, UIP, MS PA, etc.
Advanced Python programming skills, including experience with automation, APIs, and data handling libraries (e.g., pandas, NumPy) (see the illustrative sketch below).
Exposure to AI/ML concepts such as model building, classification, regression, clustering, and NLP techniques.
Familiarity with machine learning frameworks is a plus.
Experience with intelligent document processing, LLM integration, and cognitive automation is a plus (e.g., chatbots and smart decisioning).
Understanding of integration technologies – REST/SOAP APIs, databases, and enterprise platforms.
Experience with Microsoft Power Platform tools like Power Apps, Power Automate, and Power BI is beneficial.
Don't quite meet every single requirement, but still believe you'd be a great fit for the job? We'll never know unless you hit the 'Apply' button. Start your journey with us today.
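As an illustration of the Python automation and data-handling skills listed above, here is a minimal sketch that pulls records from a REST endpoint and summarizes them with pandas. The endpoint URL and field names are hypothetical placeholders, not details from the posting.

```python
import requests
import pandas as pd

# Hypothetical REST endpoint and field names, used purely for illustration.
API_URL = "https://example.internal/api/invoices"

def fetch_invoices(url: str) -> pd.DataFrame:
    """Pull invoice records from a REST API and load them into a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate invoice amounts by vendor to spot candidates for automation."""
    return (
        df.groupby("vendor")["amount"]
          .agg(total="sum", count="size")
          .sort_values("total", ascending=False)
    )

if __name__ == "__main__":
    invoices = fetch_invoices(API_URL)
    print(summarize(invoices).head(10))
```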
Posted 3 weeks ago
0 years
10 - 30 Lacs
Sonipat
Remote
Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India’s first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Mining Engineer to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.
Key Responsibilities:
● Develop and deliver comprehensive and engaging lectures for the undergraduate “Data Mining”, “Big Data”, and “Data Analytics” courses, covering the full syllabus from foundational concepts to advanced techniques.
● Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
● Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association Rule Mining, Clustering, and Anomaly Detection (see the illustrative sketch below).
● Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software.
● Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
● Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
● Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge.
● Contribute to the academic and research environment of the department and the university.
Required Qualifications:
● A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
● Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
● Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
● Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
● Excellent communication and interpersonal skills.
Preferred Qualifications:
● A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
● Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
● Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
● Experience in mentoring student teams for data science competitions or hackathons.
Perks & Benefits:
● Competitive salary packages aligned with industry standards.
● Access to state-of-the-art labs and classroom facilities.
● To know more about us, feel free to explore our website: Newton School of Technology.
We look forward to the possibility of having you join our academic team and help shape the future of tech education!
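As a small illustration of the kind of classification and clustering exercise an instructor in this role might assign, here is a sketch using scikit-learn's bundled Iris dataset. The dataset, split ratio, and hyperparameters are illustrative choices, not requirements from the posting.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Load a small benchmark dataset as a stand-in for real course material.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Classification: fit a decision tree and report hold-out accuracy.
clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)
print("Decision tree accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Clustering: group the same observations without using the labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)
print("Cluster sizes:", [int((labels == k).sum()) for k in range(3)])
```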
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹3,000,000.00 per year
Benefits: Food provided, Health insurance, Leave encashment, Paid sick time, Paid time off, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus, Quarterly bonus, Yearly bonus
Application Question(s): Are you interested in a full-time onsite Instructor role? Are you ready to relocate to Sonipat - NCR Delhi? Are you ready to relocate to Pune?
Work Location: In person
Expected Start Date: 15/07/2025
Posted 3 weeks ago
0 years
18 Lacs
Gurgaon
On-site
Job Description:
Key Responsibilities:
Lead the design, implementation, and optimization of MySQL databases on AWS and Azure cloud environments.
Manage cloud-native MySQL services such as AWS RDS, Aurora, and Azure Database for MySQL.
Oversee database security, including user management, encryption, and backup strategies.
Develop and implement performance tuning strategies, including query optimization, indexing, and hardware scaling.
Design and manage high availability and disaster recovery strategies using replication, clustering, and automated backups.
Automate routine DBA tasks using tools like Ansible, Python, or shell scripting (see the illustrative sketch below).
Monitor MySQL database performance using cloud-native monitoring tools and third-party solutions.
Troubleshoot and resolve database-related issues in a timely manner, ensuring high availability and minimal downtime.
Lead and mentor a team of junior DBAs, ensuring effective collaboration with development and operations teams.
Manage database migrations, upgrades, and capacity planning for future growth.
Required Skills & Experience:
Proven experience as a MySQL DBA, with a focus on cloud platforms like AWS (RDS, Aurora) and Azure (Azure Database for MySQL).
Strong expertise in MySQL performance tuning, query optimization, and index management.
Hands-on experience with high availability solutions (replication, clustering) and backup/recovery strategies.
Expertise in cloud-native database management and deployment in AWS and Azure environments.
Proficient in database automation using scripting languages (Python, Bash, Ansible).
Experience with monitoring tools (CloudWatch, Azure Monitor, Percona Monitoring, Nagios).
Strong troubleshooting skills and the ability to resolve complex database issues quickly.
Experience with security management, including access control, encryption, and auditing.
Familiarity with database migrations and upgrades in cloud environments.
Preferred Qualifications:
MySQL certifications or cloud certifications (AWS Certified Database – Specialty, Azure Database certifications).
Experience with Infrastructure as Code (Terraform, CloudFormation) for MySQL provisioning.
Familiarity with DevOps and CI/CD processes in a database environment.
Experience in managing MySQL in containerized environments (Docker, Kubernetes).
Job Types: Full-time, Permanent
Pay: Up to ₹1,800,000.00 per year
Schedule: Day shift, Fixed shift, Monday to Friday
Supplemental Pay: Performance bonus
Work Location: In person
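To illustrate the kind of Python-based DBA automation described above, here is a minimal sketch that queries information_schema for unusually large tables, assuming the pymysql package is available. The connection details are hypothetical placeholders; a real script would read them from a secrets manager or environment variables.

```python
import pymysql

# Hypothetical connection details used only for illustration.
CONN_ARGS = dict(host="mysql.example.internal", user="dba_readonly",
                 password="change-me", database="information_schema")

def report_large_tables(min_rows: int = 1_000_000) -> None:
    """List tables above an approximate row-count threshold as candidates for index review."""
    query = """
        SELECT table_schema, table_name, table_rows
        FROM information_schema.tables
        WHERE table_rows > %s
        ORDER BY table_rows DESC
    """
    connection = pymysql.connect(**CONN_ARGS)
    try:
        with connection.cursor() as cursor:
            cursor.execute(query, (min_rows,))
            for schema, table, rows in cursor.fetchall():
                print(f"{schema}.{table}: ~{rows} rows")
    finally:
        connection.close()

if __name__ == "__main__":
    report_large_tables()
```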
Posted 3 weeks ago
0 years
12 Lacs
Gurgaon
On-site
Job Description:
Key Responsibilities:
Lead the design, implementation, and optimization of MySQL databases on AWS and Azure cloud environments.
Manage cloud-native MySQL services such as AWS RDS, Aurora, Azure Database for MySQL.
Oversee database security, including user management, encryption, and backup strategies.
Develop and implement performance tuning strategies, including query optimization, indexing, and hardware scaling.
Design and manage high availability and disaster recovery strategies using replication, clustering, and automated backups.
Automate routine DBA tasks using tools like Ansible, Python, or Shell scripting.
Monitor MySQL database performance using cloud-native monitoring tools and third-party solutions.
Troubleshoot and resolve database-related issues in a timely manner, ensuring high availability and minimal downtime.
Lead and mentor a team of junior DBAs, ensuring effective collaboration with development and operations teams.
Manage database migrations, upgrades, and capacity planning for future growth.
Required Skills & Experience:
Proven experience as a MySQL DBA, with a focus on cloud platforms like AWS (RDS, Aurora) and Azure (Azure Database for MySQL).
Strong expertise in MySQL performance tuning, query optimization, and index management.
Hands-on experience with high availability solutions (replication, clustering) and backup/recovery strategies.
Expertise in cloud-native database management and deployment in AWS and Azure environments.
Proficient in database automation using scripting languages (Python, Bash, Ansible).
Experience with monitoring tools (CloudWatch, Azure Monitor, Percona Monitoring, Nagios).
Strong troubleshooting skills and ability to resolve complex database issues quickly.
Experience with security management, including access control, encryption, and auditing.
Familiarity with database migrations and upgrades in cloud environments.
Preferred Qualifications:
MySQL certifications or cloud certifications (AWS Certified Database – Specialty, Azure Database certifications).
Experience with Infrastructure as Code (Terraform, CloudFormation) for MySQL provisioning.
Familiarity with DevOps and CI/CD processes in a database environment.
Experience in managing MySQL in containerized environments (Docker, Kubernetes).
Job Type: Full-time
Pay: From ₹100,000.00 per month
Schedule: Monday to Friday
Work Location: In person
Posted 3 weeks ago
1.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Title: Bioinformatician
Date: 20 Jun 2025
Job Location: Bangalore
About Syngene: Syngene ( www.syngeneintl.com ) is an innovation-led contract research, development and manufacturing organization offering integrated scientific services from early discovery to commercial supply. At Syngene, safety is at the heart of everything we do, personally and professionally. Syngene has placed safety at par with business performance, with shared responsibility and accountability, including:
Following safety guidelines, procedures, and SOPs, in letter and spirit
Overall adherence to safe practices and procedures of oneself and the teams aligned
Contributing to the development of procedures, practices and systems that ensure safe operations and compliance with the company’s integrity & quality standards
Driving a corporate culture that promotes an environment, health, and safety (EHS) mindset and operational discipline at the workplace at all times
Ensuring safety of self, teams, and lab/plant by adhering to safety protocols and following environment, health, and safety (EHS) requirements at all times in the workplace
Ensuring all assigned mandatory trainings related to data integrity, health, and safety measures are completed on time by all members of the team, including self
Compliance with Syngene’s quality standards at all times
Holding self and their teams accountable for the achievement of safety goals
Governing and reviewing safety metrics from time to time
We are seeking a highly skilled and experienced computational biologist to join our team. The ideal candidate will have a proven track record in multi-omics data analysis. They will be responsible for integrative analyses and contributing to the development of novel computational approaches to uncover biological insights.
Experience: 1-4 years
Core Purpose of the Role: To support data-driven biological research by performing computational analysis of omics data and generating translational insights through bioinformatics tools and pipelines.
Position Responsibilities
Conduct comprehensive analyses of multi-omics datasets, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics.
Develop computational workflows to integrate various -omics data to generate inferences and hypotheses for testing.
Conduct differential expression and functional enrichment analyses.
Implement and execute data processing workflows and automate the pipelines with best practices for version control, modularization, and documentation.
Apply advanced multivariate data analysis techniques, including regression, clustering, and dimensionality reduction, to uncover patterns and relationships in large datasets (see the sketch after this posting).
Collaborate with researchers, scientists, and other team members to translate computational findings into actionable biological insights.
Educational Qualifications
Master’s degree in Bioinformatics.
Mandatory Technical Skills
Programming: Proficiency in Python for data analysis, visualization, and pipeline development.
Multi-omics analysis: Proven experience in analyzing and integrating multi-omics datasets.
Statistics: Knowledge of probability distributions, correlation analysis, and hypothesis testing.
Data visualization: Strong understanding of data visualization techniques and tools (e.g., ggplot2, matplotlib, seaborn).
Preferred
Machine learning: Familiarity with AI/ML concepts
Behavioral Skills
Excellent communication skills
Objective thinking
Problem solving
Proactivity
Syngene Values
All employees will consistently demonstrate alignment with our core values: Excellence, Integrity, Professionalism.
Equal Opportunity Employer
It is the policy of Syngene to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by applicable legislation or local law. In addition, Syngene will provide reasonable accommodations for qualified individuals with disabilities.
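As an illustration of the dimensionality-reduction work mentioned in the responsibilities above, here is a minimal sketch that standardizes a synthetic omics-style matrix and projects it onto a few principal components. The matrix shape and group structure are invented for illustration; real inputs would come from an upstream pipeline.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Synthetic stand-in for an omics matrix: 60 samples x 500 features
# (e.g., gene expression values).
rng = np.random.default_rng(0)
expression = rng.normal(size=(60, 500))
expression[:30, :50] += 2.0  # make one group of samples separable

# Standardize features, then reduce to a handful of principal components.
scaled = StandardScaler().fit_transform(expression)
pca = PCA(n_components=5)
components = pca.fit_transform(scaled)

print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("First two PCs of first sample:", components[0, :2])
```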
Posted 3 weeks ago
3.0 years
10 - 12 Lacs
Delhi
On-site
Senior Fullstack AI/ML Engineer
Location: Delhi
Experience: 3-5 years
Mode: On-site
About the Role
We are seeking a highly skilled Senior AI/ML Engineer to join our dynamic team. The ideal candidate will have extensive experience in designing, building, and deploying machine learning models and AI solutions to solve real-world business challenges. You will collaborate with cross-functional teams to create and integrate AI/ML models into end-to-end applications, ensuring models are accessible through APIs or product interfaces for real-time usage.
Responsibilities
Lead the design, development, and deployment of machine learning models for various use cases such as recommendation systems, computer vision, natural language processing (NLP), and predictive analytics.
Work with large datasets to build, train, and optimize models using techniques such as classification, regression, clustering, and neural networks.
Fine-tune pre-trained models and develop custom models based on specific business needs.
Collaborate with data engineers to build scalable data pipelines and ensure the smooth integration of models into production.
Collaborate with frontend/backend engineers to build AI-driven features into products or platforms.
Build proof-of-concept or production-grade AI applications and tools with intuitive UIs or workflows.
Ensure scalability and performance of deployed AI solutions within the full application stack.
Implement model monitoring and maintenance strategies to ensure performance, accuracy, and continuous improvement of deployed models.
Design and implement APIs or services that expose machine learning models to frontend or other systems.
Utilize cloud platforms (AWS, GCP, Azure) to deploy, manage, and scale AI/ML solutions.
Stay up-to-date with the latest advancements in AI/ML research, and apply innovative techniques to improve existing systems.
Communicate effectively with stakeholders to understand business requirements and translate them into AI/ML-driven solutions.
Document processes, methodologies, and results for future reference and reproducibility.
Required Skills & Qualifications
Experience: 5+ years of experience in AI/ML engineering roles, with a proven track record of successfully delivering machine learning projects.
AI/ML Expertise: Strong knowledge of machine learning algorithms (supervised, unsupervised, reinforcement learning) and AI techniques, including NLP, computer vision, and recommendation systems.
Programming Languages: Proficient in Python and relevant ML libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras.
Data Manipulation: Experience with data manipulation libraries such as Pandas, NumPy, and SQL for managing and processing large datasets.
Model Development: Expertise in building, training, deploying, and fine-tuning machine learning models in production environments.
Cloud Platforms: Experience with cloud platforms such as AWS, GCP, or Azure for the deployment and scaling of AI/ML models.
MLOps: Knowledge of MLOps practices for model versioning, automation, and monitoring.
Data Preprocessing: Proficient in data cleaning, feature engineering, and preparing datasets for model training.
Strong experience building and deploying end-to-end AI-powered applications, not just models but full system integration.
Hands-on experience with Flask, FastAPI, Django, or similar for building REST APIs for model serving (see the sketch after this posting).
Understanding of system design and software architecture for integrating AI into production environments.
Experience with frontend/backend integration (basic React/Next.js knowledge is a plus).
Demonstrated projects where AI models were part of deployed user-facing applications.
NLP & Computer Vision: Hands-on experience with natural language processing or computer vision projects.
Big Data: Familiarity with big data tools and frameworks (e.g., Apache Spark, Hadoop) is an advantage.
Problem-Solving Skills: Strong analytical and problem-solving abilities, with a focus on delivering practical AI/ML solutions.
Nice to Have
Experience with deep learning architectures (CNNs, RNNs, GANs, etc.) and techniques.
Knowledge of deployment strategies for AI models using APIs, Docker, or Kubernetes.
Experience building full-stack applications powered by AI (e.g., chatbots, recommendation dashboards, AI assistants, etc.).
Experience deploying AI/ML models in real-time environments using API gateways, microservices, or orchestration tools like Docker and Kubernetes.
Solid understanding of statistics and probability.
Experience working in Agile development environments.
What You'll Gain
Be part of a forward-thinking team working on cutting-edge AI/ML technologies.
Collaborate with a diverse, highly skilled team in a fast-paced environment.
Opportunity to work on impactful projects with real-world applications.
Competitive salary and career growth opportunities.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,200,000.00 per year
Schedule: Day shift, Fixed shift
Work Location: In person
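To show the kind of model-serving API mentioned in the requirements above, here is a minimal FastAPI sketch. The model file name and feature fields are hypothetical stand-ins; the only assumption is a scikit-learn-style estimator persisted with joblib.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="Model serving sketch")

# Hypothetical pre-trained model persisted with joblib; the file name and
# feature list are placeholders for illustration only.
model = joblib.load("churn_model.joblib")

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float
    support_tickets: int

@app.post("/predict")
def predict(features: Features) -> dict:
    """Return a class prediction for a single observation."""
    row = [[features.tenure_months, features.monthly_spend, features.support_tickets]]
    prediction = model.predict(row)[0]
    return {"prediction": int(prediction)}

# Run locally with: uvicorn serving_sketch:app --reload
```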
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Data Science Intern
Company: INLIGHN TECH
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance
About the Company: INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data.
Role Overview: As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact.
Key Responsibilities:
Collect, clean, and preprocess data from various sources
Apply statistical methods and machine learning techniques to extract insights
Build and evaluate predictive models for classification, regression, or clustering tasks
Visualize data using libraries like Matplotlib, Seaborn, or tools like Power BI
Document findings and present results to stakeholders in a clear and concise manner
Collaborate with team members on data-driven projects and innovations
Qualifications:
Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field
Proficiency in Python and data science libraries (NumPy, Pandas, Scikit-learn, etc.)
Understanding of statistical analysis and machine learning algorithms
Familiarity with SQL and data visualization tools or libraries
Strong analytical, problem-solving, and critical thinking skills
Eagerness to learn and apply data science techniques to solve real-world problems
Internship Benefits:
Hands-on experience with real datasets and end-to-end data science projects
Certificate of Internship upon successful completion
Letter of Recommendation for top performers
Build a strong portfolio of data science projects and models
Posted 3 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Senior Specialist, Oncology New Products, Oncology Global Commercial Pipeline Analytics, HHDDA
Our Human Health Digital Data and Analytics (HHDDA) team is innovating how we understand our patients and their needs. Working cross-functionally, we are inventing new ways of engaging and interacting with our customers and patients, leveraging digital, data, and analytics, and measuring the impact.
The Senior Specialist, Oncology New Products, Oncology Global Commercial Pipeline Analytics, HHDDA will be responsible for developing and delivering data and analytics, generating strategic insights, and addressing key business questions from the Global Oncology New Products Marketing team to inform current and future pipeline strategies. The team member will partner closely with multiple cross-functional teams, including global marketing, regional marketing, clinical, outcomes research, and medical affairs, as well as across the depth of the HHDDA organization. Reporting to the Associate Director, Oncology Global Commercial Pipeline Analytics, within HHDDA, this role will lead the development of analytics capabilities for the innovative oncology new products and pipeline priorities, spanning all tumor areas across oncology and hematology. The successful candidate will ‘connect the dots’ across HHDDA capability functions like market research, forecasting, payer insights & analytics, data science, and data strategy & solutions.
Primary Responsibilities
Pipeline Analytics & Insights: Conduct analytics and synthesize insights to enable launch excellence for multiple new assets. Conceptualize and build a set of analytics capabilities and tools anchored to our marketing and launch frameworks to support strategic decision-making for the Global Oncology portfolio (e.g., market and competitor landscape assessment tools, commercial opportunity assessments, market maps, analytical patient and HCP journeys, benchmark libraries).
Analytics Delivery: Hands-on analytics project delivery with advanced expertise in data manipulation, analysis, and visualization using tools such as Excel-VBA, SQL, R, Python, PowerBI, ThoughtSpot, or similar technologies and capabilities. Leverage a variety of patient modeling techniques, including statistical, patient-flow, and simulation-based techniques, for insight generation (see the sketch after this posting).
Benchmarking Analytics: Lead benchmarking analytics to collect, analyze, and translate insights into recommended business actions to inform strategic business choices.
Stakeholder Collaboration: Partner effectively with global marketing teams, HHDDA teams, and other cross-functional teams to inform strategic decisions and increase commercial rigor through all phases of pipeline asset development.
Communication and Transparency: Provide clear and synthesized communication to global marketing leaders and cross-functional teams on commercial insights addressing the priority business questions.
Required Experience And Skills
Bachelor's degree, preferably in a scientific, engineering, or business-related field.
Overall experience of 8+ years, with 4+ years of relevant experience in oncology commercialization, advanced analytics, oncology forecasting, insights syndication, clinical development, or related roles within the pharmaceutical or biotechnology industry.
Therapeutic area experience in Oncology and/or emerging oncology therapies.
Strong problem-solving abilities, to find and execute solutions to complex or ambiguous business problems.
Experience conducting predictive modelling and secondary data analytics on large datasets using relevant skills (e.g., Excel VBA, Python, SQL) and understanding of algorithms (such as regressions, decision trees, clustering, etc.).
Deep understanding of the commercial Oncology global data ecosystem, e.g., epidemiology datasets, claims datasets, and real-world datasets.
Confident leader who takes ownership of responsibilities, is able to work autonomously, and holds self and others accountable for delivery of quality output.
Strategic thinker who is consultative, collaborative, and can “engage as equals.”
Strong communication skills using effective storytelling grounded on data insights.
Relationship-building and influencing skills with an ability to collaborate cross-functionally.
Ability to connect dots across sources, and attention to detail.
Preferred Experience And Skills
Experience in diverse healthcare datasets, insights, and analytics
Experience in the Life Science or consulting industry
Advanced degree (e.g., MBA, PharmD, PhD) preferred
Global experience preferred
Team management experience
Data visualization skills (e.g., PowerBI)
Our Human Health Division maintains a “patient first, profits later” ideology. The organization is comprised of sales, marketing, market access, digital analytics, and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide.
We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another’s thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.
Current Employees apply HERE
Current Contingent Workers apply HERE
Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Design, Data Engineering, Data Modeling, Data Science, Data Visualization, Machine Learning, Software Development, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 06/30/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R339603
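As a rough illustration of the patient-flow modeling technique named in the responsibilities, here is a toy Markov-style cohort sketch. The treatment states and transition probabilities are invented for illustration only and do not come from the posting or any real forecast.

```python
import numpy as np

# Toy patient-flow (Markov) model for a hypothetical oncology indication.
states = ["1L therapy", "2L therapy", "off treatment"]
transition = np.array([
    [0.90, 0.07, 0.03],   # from 1L therapy
    [0.00, 0.85, 0.15],   # from 2L therapy
    [0.00, 0.00, 1.00],   # off treatment is absorbing
])

cohort = np.array([1000.0, 0.0, 0.0])  # newly diagnosed patients entering 1L

for month in range(1, 13):
    cohort = cohort @ transition  # propagate the cohort one month forward
    if month % 6 == 0:
        counts = ", ".join(f"{s}: {c:,.0f}" for s, c in zip(states, cohort))
        print(f"Month {month:2d} -> {counts}")
```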
Posted 3 weeks ago
3.5 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Data Scientist
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Big Data, Python or R
Good to have skills: Scala, SQL
Job Summary
A Data Scientist is expected to be hands-on and deliver end to end on projects undertaken in the Analytics space. They must have a proven ability to drive business results with their data-based insights. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.
Roles and Responsibilities
Identify valuable data sources and collection processes
Supervise preprocessing of structured and unstructured data
Analyze large amounts of information to discover trends and patterns for the insurance industry
Build predictive models and machine-learning algorithms
Combine models through ensemble modeling (see the illustrative sketch below)
Present information using data visualization techniques
Collaborate with engineering and product development teams
Hands-on knowledge of implementing various AI algorithms and best-fit scenarios
Has worked on Generative AI based implementations
Professional And Technical Skills
3.5-5 years’ experience in Analytics systems/program delivery, including implementation experience on at least 2 Big Data or Advanced Analytics projects
Experience using statistical computer languages (R, Python, SQL, PySpark, etc.) to manipulate data and draw insights from large data sets; familiarity with Scala, Java or C++
Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications
Hands-on experience in Azure/AWS analytics platforms (3+ years)
Experience using variations of Databricks or similar analytical applications in AWS/Azure
Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
Strong mathematical skills (e.g. statistics, algebra)
Excellent communication and presentation skills
Deploying data pipelines in production based on Continuous Delivery practices
Additional Information
Multi-industry domain experience
Expert in Python, Scala, SQL
Knowledge of Tableau/Power BI or similar self-service visualization tools
Interpersonal and team skills should be top notch
Nice to have leadership experience in the past
About Our Company | Accenture
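To illustrate the ensemble modeling called out in the responsibilities, here is a minimal scikit-learn sketch that combines two model families with soft voting. The synthetic dataset stands in for real project data such as insurance claims.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic tabular data standing in for a real analytics dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=7)

# Combine two different model families with soft (probability-averaged) voting.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=7)),
    ],
    voting="soft",
)

scores = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", scores.mean().round(3))
```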
Posted 3 weeks ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: AI-ML Engineer
Location: Pune
Experience: 3–7 Years
Job Summary: We are looking for an experienced AI/ML Engineer to join our team and contribute to the development of scalable machine learning systems. You will work across teams to design, implement, and optimize end-to-end ML pipelines in a high-performance, real-time environment.
Key Responsibilities:
Collaborate with teams in Data Science, Engineering, and Operations to address large-scale ML challenges.
Build and manage end-to-end ML pipelines including data processing, training, and deployment.
Optimize models for performance, hyperparameter tuning, and training efficiency (see the illustrative sketch below).
Leverage deep learning frameworks and cloud platforms for scalable model deployment.
Ensure seamless integration with distributed systems and data ecosystems.
Required Skills:
2+ years of Python programming experience.
1+ year in ML methods: classification, clustering, recommendation, optimization, graph mining, or deep learning.
2+ years of experience with distributed frameworks (Hadoop, Spark, Kubernetes).
2+ years of experience with DL tools like TensorFlow, PyTorch, or Keras.
2+ years of cloud experience (AWS/GCP/Azure).
Excellent communication skills.
Must Have:
Prior AdTech experience (DSP, SSP, Ad Exchange).
M.Tech or Ph.D. in Computer Science, Software Engineering, Mathematics, or a related field.
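As a small illustration of the hyperparameter tuning mentioned above, here is a scikit-learn grid-search sketch over a deliberately tiny search space; real pipelines on larger data would typically use randomized or Bayesian search.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic data as a stand-in for a real training set.
X, y = make_classification(n_samples=1500, n_features=15, random_state=3)

# Small, illustrative search space.
param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}

search = GridSearchCV(
    GradientBoostingClassifier(random_state=3),
    param_grid=param_grid,
    scoring="accuracy",
    cv=3,
    n_jobs=-1,
)
search.fit(X, y)
print("Best params:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))
```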
Posted 3 weeks ago
4.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Sr SQL Developer
Experience: 4 to 10 Years
Location: Hyderabad
Work Timings: 1 PM to 10 PM
Job Responsibilities:
Mandatory Skills:
1. Strong proficiency with T-SQL.
2. Experience with the MS SQL Server relational database.
3. Experience in writing T-SQL queries, custom stored procedures, indexes, functions, and triggers as per client requirements.
4. Good knowledge and working experience in performance tuning.
Desired Skills:
Good knowledge of SQL Server 2008, 2008 R2, 2012, and 2014, with hands-on migration experience.
Good understanding of background process functionality.
Create, manage, and maintain tables using appropriate storage settings, and create databases using the Database Configuration Assistant.
Backup/recovery and performance tuning.
Knowledge of backup and recovery options; can carry out basic recovery under guidance.
Good knowledge and hands-on experience in tuning the database at the memory level; able to tweak SQL queries.
Good understanding of the SQL Server architecture; can troubleshoot connectivity issues.
Should have good administrative knowledge of Windows OS.
Should be able to administer and alter security and audit parameters under guidance.
Good working knowledge of SQL Server Profiler and the Perfmon console; can carry out administrative jobs. Familiar with using most of the options available in Profiler.
Working knowledge of clustering, mirroring, and log shipping.
Knowledge of SQL Server high availability features like clustering, log shipping, mirroring, and replication.
Please share your updated CV to hiring@paradigmit.com
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Senior Gen AI Engineer
Job Description
Brightly Software is seeking an experienced candidate to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow.
Role
As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights to inform smarter decisions faster. This will include the following:
Lead the evaluation and selection of foundation models and vector databases based on performance and business needs.
Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients.
Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases.
Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines.
Key Responsibilities
Guide the design of multi-step RAG, agentic, or tool-augmented workflows.
Implement governance, safety layers, and responsible AI practices (e.g., guardrails, moderation, auditability).
Mentor junior engineers and review GenAI design and implementation plans.
Drive experimentation, benchmarking, and continuous improvement of GenAI capabilities.
Collaborate with leadership to align GenAI initiatives with product and business strategy.
Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch (see the sketch after this posting).
Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building.
Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing), with strong experience in predictive and statistical modelling.
Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines).
Understanding and development of state management workflows using LangGraph.
Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks.
Engineer and evaluate prompts, including prompt chaining and output quality assessment.
Apply NLP and transformer model expertise to solve language tasks.
Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes.
Monitor and optimize model and pipeline performance for scalability and efficiency.
Communicate technical concepts clearly to cross-functional and non-technical stakeholders.
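To illustrate the retrieval step of a RAG pipeline like the ones described above, here is a minimal sketch assuming the sentence-transformers and faiss packages are installed. The documents, embedding model name, and query are illustrative stand-ins; a real pipeline would pass the retrieved passages to an LLM as context.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Tiny in-memory corpus standing in for real client documentation.
documents = [
    "Work orders can be exported to CSV from the reporting module.",
    "Preventive maintenance schedules are configured per asset class.",
    "API tokens expire after 90 days and must be rotated by an admin.",
]

# Embed the corpus and index it with FAISS (exact inner-product search).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(documents, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar documents; these would be fed to an LLM."""
    q = model.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [documents[i] for i in ids[0]]

print(retrieve("How do I renew an API token?"))
```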
Posted 3 weeks ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Gen AI Engineer
Job Description
Brightly Software is seeking a high performer to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow.
Role
As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights to inform smarter decisions faster. This will include the following:
Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients.
Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases.
Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines.
Key Responsibilities
Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch.
Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks.
Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building.
Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing), with strong experience in predictive and statistical modelling.
Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines).
Understanding and development of state management workflows using LangGraph.
Engineer and evaluate prompts, including prompt chaining and output quality assessment.
Apply NLP and transformer model expertise to solve language tasks.
Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes.
Monitor and optimize model and pipeline performance for scalability and efficiency.
Communicate technical concepts clearly to cross-functional and non-technical stakeholders.
Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design.
Qualifications
Bachelor’s degree is required.
2-4 years of total experience with a strong focus on AI and ML, and 1+ years in core GenAI engineering.
Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models.
Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch.
Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle.
Experience working with agentic AI.
Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS.
Strong programming skills in Python, with experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain.
Practical experience in working with vector databases and embedding methodologies for efficient information retrieval.
Possess experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI.
Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies.
Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures.
Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes.
Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training.
Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders.
Able to operate successfully in a lean, fast-paced organization, and to create a vision and organization that can scale quickly.
Posted 3 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Database Administrator (DBA) will be responsible for the performance, integrity, and security of the project's database. This role includes planning, development, and troubleshooting. The DBA will work closely with project managers, developers, and other stakeholders to ensure database systems are optimized and efficient.
Key Responsibilities:
• Database Design and Implementation: Design and implement new database systems according to project requirements.
• Performance Monitoring: Monitor and optimize database performance to ensure quick query response times and efficient operations.
• Security: Ensure database security by implementing access controls, encryption, and regular audits.
• Backup and Recovery: Develop and maintain a strategy for database backup and recovery to protect data integrity in case of failure.
• Troubleshooting: Diagnose and resolve database issues promptly to minimize downtime.
• Collaboration: Work with project team members to understand and fulfill database requirements.
• Documentation: Maintain comprehensive documentation of database configurations, policies, and procedures.
• Working on Dassault’s Apriso MES project will be an added advantage.
• Development of SSRS reports.
Required Qualifications
• Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
• Experience: Proven 6+ years of experience as a DBA in similar projects, with a solid understanding of database management systems (DBMS).
• Technical Skills: Proficiency in database languages such as SQL, and familiarity with DBMSs like Oracle and MySQL.
• Analytical Skills: Strong analytical skills to interpret complex data and provide actionable insights.
• Problem-Solving: Excellent problem-solving abilities with a proactive approach to issues.
• Communication: Strong communication skills to collaborate effectively with team members and stakeholders.
Preferred Qualifications
• Certification: DBA certifications from recognized institutions (e.g., Oracle Certified DBA, Microsoft SQL Server Certification).
• Advanced Skills: Knowledge of advanced database management tasks such as replication, clustering, and partitioning.
Posted 3 weeks ago
0.0 - 5.0 years
10 - 12 Lacs
Delhi, Delhi
On-site
Senior Fullstack AI/ML Engineer
Location: Delhi
Experience: 3-5 years
Mode: On-site
About the Role
We are seeking a highly skilled Senior AI/ML Engineer to join our dynamic team. The ideal candidate will have extensive experience in designing, building, and deploying machine learning models and AI solutions to solve real-world business challenges. You will collaborate with cross-functional teams to create and integrate AI/ML models into end-to-end applications, ensuring models are accessible through APIs or product interfaces for real-time usage.
Responsibilities
Lead the design, development, and deployment of machine learning models for various use cases such as recommendation systems, computer vision, natural language processing (NLP), and predictive analytics.
Work with large datasets to build, train, and optimize models using techniques such as classification, regression, clustering, and neural networks.
Fine-tune pre-trained models and develop custom models based on specific business needs.
Collaborate with data engineers to build scalable data pipelines and ensure the smooth integration of models into production.
Collaborate with frontend/backend engineers to build AI-driven features into products or platforms.
Build proof-of-concept or production-grade AI applications and tools with intuitive UIs or workflows.
Ensure scalability and performance of deployed AI solutions within the full application stack.
Implement model monitoring and maintenance strategies to ensure performance, accuracy, and continuous improvement of deployed models.
Design and implement APIs or services that expose machine learning models to frontend or other systems.
Utilize cloud platforms (AWS, GCP, Azure) to deploy, manage, and scale AI/ML solutions.
Stay up-to-date with the latest advancements in AI/ML research, and apply innovative techniques to improve existing systems.
Communicate effectively with stakeholders to understand business requirements and translate them into AI/ML-driven solutions.
Document processes, methodologies, and results for future reference and reproducibility.
Required Skills & Qualifications
Experience: 5+ years of experience in AI/ML engineering roles, with a proven track record of successfully delivering machine learning projects.
AI/ML Expertise: Strong knowledge of machine learning algorithms (supervised, unsupervised, reinforcement learning) and AI techniques, including NLP, computer vision, and recommendation systems.
Programming Languages: Proficient in Python and relevant ML libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras.
Data Manipulation: Experience with data manipulation libraries such as Pandas, NumPy, and SQL for managing and processing large datasets.
Model Development: Expertise in building, training, deploying, and fine-tuning machine learning models in production environments.
Cloud Platforms: Experience with cloud platforms such as AWS, GCP, or Azure for the deployment and scaling of AI/ML models.
MLOps: Knowledge of MLOps practices for model versioning, automation, and monitoring.
Data Preprocessing: Proficient in data cleaning, feature engineering, and preparing datasets for model training.
Strong experience building and deploying end-to-end AI-powered applications, not just models but full system integration.
Hands-on experience with Flask, FastAPI, Django, or similar for building REST APIs for model serving.
Understanding of system design and software architecture for integrating AI into production environments.
Experience with frontend/backend integration (basic React/Next.js knowledge is a plus).
Demonstrated projects where AI models were part of deployed user-facing applications.
NLP & Computer Vision: Hands-on experience with natural language processing or computer vision projects.
Big Data: Familiarity with big data tools and frameworks (e.g., Apache Spark, Hadoop) is an advantage.
Problem-Solving Skills: Strong analytical and problem-solving abilities, with a focus on delivering practical AI/ML solutions.
Nice to Have
Experience with deep learning architectures (CNNs, RNNs, GANs, etc.) and techniques.
Knowledge of deployment strategies for AI models using APIs, Docker, or Kubernetes.
Experience building full-stack applications powered by AI (e.g., chatbots, recommendation dashboards, AI assistants, etc.).
Experience deploying AI/ML models in real-time environments using API gateways, microservices, or orchestration tools like Docker and Kubernetes.
Solid understanding of statistics and probability.
Experience working in Agile development environments.
What You'll Gain
Be part of a forward-thinking team working on cutting-edge AI/ML technologies.
Collaborate with a diverse, highly skilled team in a fast-paced environment.
Opportunity to work on impactful projects with real-world applications.
Competitive salary and career growth opportunities.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,200,000.00 per year
Schedule: Day shift, Fixed shift
Work Location: In person
Posted 3 weeks ago
2.0 years
0 Lacs
India
Remote
YMT is not just a workplace; it's an innovation hub where cutting-edge ideas transform the advertising world. YMT has been honored with the prestigious Mobile Marketing Association Award for Best Technology Provider in Indonesia for both 2022 and 2023. Our excellence doesn't stop there - we're also recognized among the top 3 technology providers across the Asia-Pacific region. Since our inception in 2019, YMT has been a name synonymous with growth and collaboration. We've joined forces with top-tier advertisers like Unilever, Nestle, Amazon, BliBli, Lazada, and Samsung, cementing our position as a leader in the SEA region. With offices in Jakarta, Singapore, Cambodia, Ho Chi Minh, and New Delhi, we embody a dynamic and multicultural spirit, reflecting the diverse talents that fuel our innovation.
YMT is looking for a skilled Data Scientist & Analyst to join our innovative team. In this role, you will be responsible for analyzing complex data sets to provide actionable insights that drive business decisions and improve our programmatic advertising strategies. Your expertise in data analysis, statistical modeling, and visualization will help us understand customer behavior, optimize campaigns, and enhance our overall performance in the competitive digital marketing landscape.
Responsibilities
Build and maintain customer segmentation models (e.g., clustering, RFM analysis, behavioral cohorts) (see the illustrative sketch below)
Design and deploy predictive models (e.g., churn prediction, coupon redemption, purchase frequency)
Develop and maintain dashboards to track key business metrics and segment performance
Conduct exploratory data analysis to identify trends, patterns, and opportunities
Present data-driven insights and recommendations to business and marketing stakeholders
Continuously refine models and reporting frameworks to enhance accuracy and business impact
Support ad-hoc analysis and reporting requests from various departments
Requirements
Bachelor's degree in Data Science, Statistics, Mathematics, or a related field; Master's degree is a plus
2+ years of experience in data analysis or data science roles
Proficient in programming languages such as Python, R, SQL
Solid understanding of machine learning techniques, including clustering, classification, and regression
Experience with data visualization tools such as Tableau, Power BI, or Looker Studio
Solid understanding of data management principles and database structures
Excellent problem-solving skills and attention to detail
Strong communication skills and the ability to present complex information clearly
Ability to work independently and as part of a team in a fast-paced environment
Passion for leveraging data to drive business success and improve user experiences
Experience in retail, e-commerce, or FMCG analytics
Familiarity with cloud platforms such as AWS, GCP
Exposure to marketing analytics, campaign measurement, or attribution modeling
Benefits
Work From Home
Flexible work arrangements
Professional development
Collaborative and inclusive culture
Generous leave policy
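To illustrate the RFM-style customer segmentation mentioned in the responsibilities, here is a minimal sketch that derives recency, frequency, and monetary features from a hypothetical transaction log and clusters customers with KMeans. The column names and values are invented for illustration.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical transaction log; column names are illustrative only.
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3, 4, 4],
    "order_date": pd.to_datetime([
        "2024-01-05", "2024-03-20", "2024-02-11", "2024-02-28",
        "2024-03-30", "2023-11-02", "2024-03-01", "2024-03-25",
    ]),
    "amount": [120.0, 80.0, 40.0, 55.0, 60.0, 300.0, 25.0, 30.0],
})

snapshot = transactions["order_date"].max() + pd.Timedelta(days=1)

# Recency, Frequency, Monetary features per customer.
rfm = transactions.groupby("customer_id").agg(
    recency=("order_date", lambda d: (snapshot - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)

# Scale the features and cluster customers into behavioral segments.
scaled = StandardScaler().fit_transform(rfm)
rfm["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(rfm)
```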
Posted 3 weeks ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
• Monitor and maintain Citrix infrastructure including Citrix Virtual Apps and Desktops (CVAD), Citrix StoreFront, Citrix Director, and Citrix Workspace. Experience with Citrix ADC (NetScaler) and profile management tools (UPM, FSLogix).
• Troubleshoot and resolve issues related to: user login and session performance, application publishing, and profile management (e.g., Citrix UPM, FSLogix).
• Perform routine maintenance tasks such as: updating Citrix policies, managing machine catalogs and delivery groups, and monitoring server health and resource usage.
• Assist in patching and updating Citrix components.
• Experience with Windows Server infrastructure (2016/2019/2022).
• Troubleshoot issues related to Windows, Active Directory, DNS, DHCP, Group Policy.
• Manage and optimize server performance, patching, and security configurations.
• Automate administrative tasks using PowerShell and other scripting tools.
• Perform system upgrades, migrations, and capacity planning.
• Implement and manage high availability and disaster recovery solutions (e.g., clustering, DFS, backup).
• Maintain documentation for system configurations, procedures, and troubleshooting.
• Participate in audits, compliance checks, and DR testing.
• Mentor L1/L2 support teams and lead technical initiatives.
• Managing and maintaining Citrix environments, which includes tasks like configuring and monitoring Citrix systems and ensuring they meet high availability and disaster recovery requirements.
• Troubleshooting and problem-solving skills for diagnosing and resolving complex technical issues within the virtualized environment.
• Collaborating with cross-functional teams to ensure seamless integration of Citrix solutions within the broader IT landscape.
• Maintain documentation for Citrix architecture, configurations, and procedures.
• Participate in disaster recovery planning and testing.
• Mentor L1/L2 Citrix & Windows support staff and provide technical guidance.
• Provide close liaison with project teams to ensure the smooth transition of new applications, systems and initiatives into the production environment.
• Review and recommend options to improve the effectiveness of the global Windows & Citrix infrastructure; research/plan/execute migration.
Posted 3 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Data Scientist – AIML, GenAI & Agentic AI
Location: Pune/ Bangalore/ Indore/ Kolkata
Job Type: Full-time
Experience Level: 4+ Years
NP: Immediate Joiner or 15 Days Max
Job Description
We are seeking a highly skilled and innovative Data Scientist / AI Engineer with deep expertise in AI/ML, Generative AI, and Agentic AI frameworks to join our advanced analytics and AI team. The ideal candidate will possess a robust background in data science and machine learning, along with hands-on experience in building and deploying end-to-end intelligent systems using modern AI technologies including RAG (Retrieval-Augmented Generation), LLMs, and agent orchestration tools.
Key Responsibilities
Design, build, and deploy machine learning models and Generative AI solutions for a wide range of use cases (text, vision, and tabular data).
Develop and maintain AI/ML pipelines for large-scale training and inference in production environments.
Leverage frameworks such as LangChain, LangGraph, and CrewAI for building Agentic AI workflows.
Fine-tune and prompt-engineer LLMs (e.g., GPT, BERT) for enterprise-grade RAG and NLP solutions.
Collaborate with business and engineering teams to translate business problems into AI/ML models that deliver measurable value.
Apply advanced analytics techniques such as regression, classification, clustering, sequence modeling, association rules, computer vision, and NLP.
Architect and implement scalable AI solutions using Python, PyTorch, TensorFlow, and cloud-native technologies.
Ensure integration of AI solutions within existing enterprise architecture using containerized services and orchestration (e.g., Docker, Kubernetes).
Maintain documentation and present insights and technical findings to stakeholders.
Required Skills and Qualifications
Bachelor’s/Master’s/PhD in Computer Science, Data Science, Statistics, or a related field.
Strong proficiency in Python and libraries such as Pandas, NumPy, Scikit-learn, etc.
Extensive experience with deep learning frameworks: PyTorch and TensorFlow.
Proven experience with Generative AI, LLMs, RAG, BERT, and related architectures.
Familiarity with LangChain, LangGraph, and CrewAI, and strong knowledge of agent orchestration and autonomous workflows.
Experience with large-scale ML pipelines, MLOps practices, and cloud platforms (AWS, GCP, or Azure).
Deep understanding of software engineering principles, design patterns, and enterprise architecture.
Strong problem-solving, analytical thinking, and debugging skills.
Excellent communication, presentation, and cross-functional collaboration abilities.
Preferred Qualifications
Experience in fine-tuning LLMs and optimizing prompt engineering techniques.
Publications, open-source contributions, or patents in AI/ML/NLP/GenAI.
Experience with vector databases and tools such as Pinecone, FAISS, Weaviate, or Milvus.
Why Join Us?
Work on cutting-edge AI/ML and GenAI innovations.
Collaborate with top-tier scientists, engineers, and product teams.
Opportunity to shape the next generation of intelligent agents and enterprise AI solutions.
Flexible work arrangements and continuous learning culture.
To Apply: Please submit your resume and portfolio of relevant AI/ML work (e.g., GitHub, papers, demos) to Shanti.upase@calsoftinc.com
Posted 3 weeks ago
0.0 years
0 Lacs
Mumbai, Maharashtra
On-site
Senior Associate/ Assistant Manager - Data Scientist
Location: Mumbai, Maharashtra, India
Date posted: June 20, 2025
Job ID: 18618

Our Opening and Your Responsibilities
Part of the Global Finance Carbonation team (operational excellence in Finance for processes and systems with a data-driven approach).
Work closely with the team to improve their processes and systems (using diagnostic analytics, data mining, data analytics, process mining).
Work on data analytics, predictive analytics, and AI/ML projects as defined.

What You Need to Succeed
Education Qualification
B.E., B.Tech., BSc. (Comps/IT), MSc. (Comps/IT)

Technical skills
Strong proficiency in Python and experience with classical machine learning techniques.
Ability to collect, clean, and preprocess large datasets to ensure high-quality data for modeling.
Design, develop, and implement machine learning models using traditional techniques such as regression, decision trees, clustering, ensemble methods, and time series forecasting (a short scikit-learn sketch follows this posting).
Identify and create meaningful features to improve model accuracy and performance.
Perform time series analysis and forecasting to predict future trends and patterns.
Assess and validate models using appropriate metrics and statistical techniques to ensure robustness and reliability.
Translate model outputs into actionable business insights and communicate findings to stakeholders.
Work closely with cross-functional teams, including data engineers, analysts, and business leaders, to drive data-driven decision-making.
Maintain comprehensive documentation of models, methodologies, and processes.
Good analytical and problem-solving skills.
Predictive analysis using different algorithms.

Our Offer to You
"One Team" that thrives on collaboration and innovation.
Opportunities to work with global teams.
An open, fair and inclusive environment.
Multitude of learning and growth opportunities.
Medical insurance for you and your family, with access to a telemedicine application.
A brand name that is identified worldwide with precision, quality, and innovation.

About Mettler Toledo
METTLER TOLEDO is a global leader in precision instruments and services. We are renowned for innovation and quality across laboratory, process analytics, industrial, product inspection, and retailing applications. Our sales and service network is one of the most extensive in the industry. Our products are sold in more than 140 countries, and we have a direct presence in approximately 40 countries. For more information, please visit www.mt.com.

Equal Opportunity Employment
We promote equal opportunity worldwide and value diversity in our teams in terms of business background, area of expertise, gender and ethnicity. For more information on our commitment to Sustainability, Diversity and Equal Opportunity please visit us here.
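As a brief illustration of the classical techniques listed above (ensembles, regression, and model validation), the sketch below trains a random forest regressor on a synthetic scikit-learn dataset and evaluates it on a held-out split. All data and metrics are illustrative only and unrelated to any actual finance process.

```python
"""Classical ML sketch: random forest regression with a train/test split.

Uses a synthetic dataset from scikit-learn, so all numbers are illustrative
only. Requires scikit-learn and NumPy.
"""
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a cleaned, preprocessed dataset.
X, y = make_regression(n_samples=1_000, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# An ensemble method from the "traditional techniques" family listed above.
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Validate on held-out data with standard regression metrics.
predictions = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, predictions):.2f}")
print(f"R^2: {r2_score(y_test, predictions):.3f}")
```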
Posted 3 weeks ago
2.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

AWS Analyst
The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. The individual will need to be adaptive and able to react quickly to changing business needs.

Work you'll do
Plan, design and develop cloud-based applications
Work in tandem with the engineering team to identify and implement the most optimal cloud-based solutions
Design and deploy enterprise-wide scalable operations on cloud platforms
Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle
Provide administration for cloud deployments and ensure the environments are appropriately configured and maintained
Monitor environment stability and respond to any issues or service requests for the environment
Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
Apply exceptional problem-solving skills, with the ability to see and solve issues
Build and design web services in the cloud, along with implementing the set-up of geographically redundant services
Orchestrate and automate cloud-based platforms
Continuously monitor system effectiveness and performance and identify areas for improvement, collaborating with key stakeholders
Provide guidance and coaching to team members as required, contribute to documenting the cloud operations playbook, and provide thought leadership in development automation and CI/CD
Provide insights for optimization of cloud computing costs

Required:
2-3 years of technology consulting experience
A minimum of 2 years of experience in Cloud Operations
High degree of knowledge using AWS services such as Lambda, Glue, S3, Redshift, SNS, SQS and more (a minimal Lambda/S3 sketch follows this posting)
Strong scripting experience with Python, the ability to write SQL queries, and strong analytical skills
Experience working on CI/CD/DevOps is nice to have
Proven experience with agile/iterative methodologies implementing cloud projects
Ability to translate business requirements and technical requirements into technical design
Good knowledge of end-to-end project delivery methodology implementing cloud projects
Strong UNIX operating system concepts and shell scripting knowledge
Good knowledge of cloud computing technologies and current computing trends
Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners
Ability to operate independently with clear focus on schedule and outcomes
Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 303777
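As a small, hedged illustration of the AWS and Python scripting skills listed under Required above, the sketch below shows a Lambda-style handler that lists objects under an S3 prefix with boto3. The bucket name and prefix are invented placeholders, and the code assumes an execution role with s3:ListBucket permission on the bucket.

```python
"""Minimal AWS Lambda-style handler sketch using boto3.

Bucket name and prefix are placeholders; the function assumes an execution
role with s3:ListBucket permission. boto3 ships with the Lambda Python runtime.
"""
import json

import boto3

s3 = boto3.client("s3")  # create the client once so warm invocations reuse it


def lambda_handler(event, context):
    """List objects under a prefix and return their keys and sizes."""
    bucket = event.get("bucket", "example-bucket")   # placeholder default
    prefix = event.get("prefix", "raw/")             # placeholder default

    response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    objects = [
        {"key": obj["Key"], "size_bytes": obj["Size"]}
        for obj in response.get("Contents", [])      # "Contents" is absent when no objects match
    ]

    return {
        "statusCode": 200,
        "body": json.dumps({"bucket": bucket, "count": len(objects), "objects": objects}),
    }
```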
Posted 3 weeks ago