Home
Jobs

5359 Tensorflow Jobs - Page 34

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job description: Data Science SME. Location: Lower Parel, Mumbai (Onsite), US hours. Job Requirements: • 6 to 7 years of experience manipulating data sets and building statistical and machine learning models • Master's or Ph.D. in Statistics, Mathematics, Computer Science, or another quantitative field • Fluent English (written/spoken) • Experience developing machine learning / data science models, from coding to deployment • 2+ years of experience in teaching or training • 3+ years of hands-on hybrid development experience preferred. Skills: • Able to train/mentor/coach in coding (Python and SQL mandatory; Java or C++) • Project management background preferred • Knowledge of the consulting/sales structure • Empathy and a service attitude; comfortable in a fast-paced environment • Project management experience • Previous international experience (US, Canada, or Europe) desirable • Leading consultants to grow and create tangible benefits and assets. Competencies: Mentor / develop / train consultants; orientation to results; leadership. Main responsibilities of the position: • Collect data through means such as analyzing business results or by setting up and managing new studies • Transfer data into a new format to make it more appropriate for analysis • Build tools to automate data collection • Compare and analyze statistical information to identify patterns, relationships, and problems • Define and utilize statistical methods to solve industry-specific problems in varying
fields, such as economics and engineering • Prepare detailed reports for management and other departments by analyzing and interpreting data • Train assistants and other team members in how to properly organize findings and read collected data • Design computer code in various languages to improve and update software and applications • Refer to previous instances and findings to determine the ideal method for gathering data. Desired Skills (including but not limited to): • Knowledge of deep learning / neural network techniques, specifically NLP (Natural Language Processing), Generative AI, and Computer Vision • Python and SQL coding skills are indispensable • Cloud experience in one of AWS (Amazon Web Services), Azure, or Google Cloud Platform • Proficiency in machine learning libraries and frameworks such as TensorFlow, Keras, PyTorch, OpenCV, BERT, ELMo, spaCy, NLTK, etc. • Preferred: experience creating chatbots and similar applications that use NLP; Optical Character Recognition and computer vision projects such as face recognition are a plus • Experience using statistical computing languages, including Python and SQL (R is a plus), to manipulate data and draw insights from large data sets • Knowledge and experience in statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, social network analysis, etc. • Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc. • Ability to compile and organize retrieved statistical information and present findings to management • Ability to work toward multiple deadlines simultaneously • Strong problem-solving skills with an emphasis on product development • Certification in a cloud-based/machine learning service desirable
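As an illustrative aside (not part of the posting), the regression skills this listing asks for can be sketched with a minimal ordinary least squares fit in plain Python; real project work would use libraries such as scikit-learn or statsmodels.

```python
# Minimal ordinary least squares for simple linear regression (y = slope*x + intercept).
# Illustrative sketch only, using the closed-form covariance/variance formula.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Data generated from y = 2x + 1, so the fit should recover those coefficients.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```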

Posted 6 days ago

Apply

3.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Summary Position Summary CORE BUSINESS OPERATIONS The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously. ROLE Level: Consultant As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. As required, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. Python developers can take on various job roles, such as back-end web developer, data scientist, automation engineer, and machine learning engineer. The work you will do includes: Work on various software projects using Python as the primary programming language. Get involved in developing desktop applications, command-line tools, automation scripts, or backend services. Responsibilities include writing clean, efficient, and maintainable code, collaborating with other team members, and participating in the software development lifecycle. Build dynamic websites and web applications using Python along with frameworks such as Django, Flask, or FastAPI.
Work on back-end aspects of web development, implementing features, optimizing performance, and ensuring the security of web applications. Develop software solutions using industry-standard delivery methodologies like Agile and Waterfall across different architectural patterns. Write clean, efficient, and well-documented code maintaining industry and client standards, ensuring code quality and code coverage adherence, as well as debugging and resolving any issues/defects. Participate in delivery processes such as Agile development, actively contributing to sprint planning, daily stand-ups, and retrospectives. Resolve issues or incidents reported by end users and escalate any quality issues or risks to team leads/scrum masters/project leaders. Develop expertise in the end-to-end construction cycle, from design (low level and high level), coding, unit testing, deployment, and defect fixing, along with coordinating with multiple stakeholders. Qualifications Skills / Project Experience: Must Have: 3 - 6 years of hands-on experience in writing scalable code in the Python programming language, with knowledge of at least one Python web framework like Django, Flask, FastAPI, etc. Experience in Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch. Experience in database technologies such as SQL and NoSQL with ORM implementations. Experience in event-driven programming in Python. Proficient understanding of code versioning tools such as Git, SVN, Bitbucket, etc.
Experience in any one of the cloud computing platforms like AWS, Azure, or GCP. Experience in Agile/SAFe Agile project development methodologies. Good interpersonal and communication skills. Flexibility to adapt and apply innovation to varied business domains and apply technical solutioning and learnings to use cases across business domains and industries. Knowledge and experience working with Microsoft Office tools. Good to Have: Proficiency in Python and machine learning libraries, experience with deep learning frameworks (e.g., TensorFlow, PyTorch), knowledge of software engineering principles, understanding of cloud computing platforms and deployment pipelines, familiarity with DevOps practices. Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university. Prior Experience: 3 – 6 years of experience working with Python. Location: The team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously.
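As an illustrative aside (not part of the posting), the event-driven programming style this listing asks about can be sketched as a tiny publish/subscribe event bus in plain Python; the names here are hypothetical, not a real framework's API.

```python
# Tiny publish/subscribe event bus sketching event-driven programming in Python.
# Handlers register for named events and are invoked when an event is published.
from collections import defaultdict

class EventBus:
    def __init__(self):
        # Maps event name -> list of handler callables.
        self._handlers = defaultdict(list)

    def subscribe(self, event, handler):
        self._handlers[event].append(handler)

    def publish(self, event, payload):
        # Deliver the payload to every handler registered for this event.
        for handler in self._handlers[event]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order_created", received.append)
bus.publish("order_created", {"id": 42})
```

Production systems typically replace the in-process bus with a broker such as Kafka or RabbitMQ, but the subscribe/publish shape stays the same.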
Learn more about our Technology Consulting practice on www.deloitte.com For information on CBO visit - https://www.youtube.com/watch?v=L1cGlScLuX0 For information on life of an Analyst at CBO visit - https://www.youtube.com/watch?v=CMe0DkmMQHI Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits.
Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300298

Posted 6 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Statistics & Data Corporation (SDC) is a specialized contract research organization (CRO) headquartered in Arizona that has delivered top-tier clinical trial services to pharmaceutical, biologic, and medical device/diagnostic companies since 2005. SDC provides a technology-enabled service offering, giving clients both clinical services expertise and the technology they need to be successful in their clinical trials. Job Summary: Direct the development and maintenance of the required systems, infrastructure, and personnel to make reliable, cost-effective automation and machine learning solutions available within SDC and to external partners and clients. Ensure systems and applications have appropriate validation documentation in place according to standard operating procedures. Maintain a portfolio of requested and completed applications, including metrics around resource needs (internal and external to DI&A) and the intended goal of each application, to enable discussion around prioritization. Develop agile development practices on large initiatives to ensure alignment with specific company goals. Participate in the identification of cross-department company goals and initiatives. Oversee the continued effort to clean, process, store, and analyze clinical data, with application toward research and development of Artificial Intelligence (AI) and Machine Learning (ML) algorithms supporting better understanding of the safety and efficacy of new therapies. Contribute to the organization's strong drive to be at the forefront of using AI in clinical trials to simplify data processing and discover imperceptible correlations. Lead and manage software development activities to drive automation across SDC departments, reducing error and increasing efficiency across those departments. Serve as the primary point within the company driving standardization.
Primary Responsibilities Oversee day-to-day activities involving data science, data engineering, automation, and business intelligence. Develop standard metrics demonstrating model performance, robustness, and validity. Engage with various internal departments such as data management, technology solutions, biostatistics, statistical programming, project management, IT, and business development to strategically identify areas to apply AI and/or ML toward increasing the efficiency of processes and detecting information trends. Ensure that algorithms can be practically deployed with minimal resource usage and maximum ease of use in the existing architecture. Oversee the delivery of high-quality software design documentation. Develop ML models which autonomously identify inconsistent information in patient data and discover correlations between patient data and medically relevant conditions, such as a specific adverse event or high efficacy. Prototype new ideas/technologies to create proofs of concept and demos. Develop standard operating procedures for the use of artificial intelligence within clinical trials. Develop and communicate a roadmap of activities, with quarterly updates to executive management. Ensure timelines and delivery are met and raise issues early for any risks. Perform other related duties incidental to the work described herein. Adhere to all essential systems and processes that are required at SDC to maintain compliance with business and regulatory requirements. The above statements describe the general nature and level of work being performed by individuals assigned to this classification. This document is not intended to be an exhaustive list of all responsibilities and duties required of personnel so classified.
Requirements Required Skills: Experience with an AI framework such as TensorFlow, MXNet, Theano, Keras, PyTorch, or Caffe. Experience in Python. Experience in the software development process; evidence of delivering production-level code. Familiarity with AI algorithms for problems including natural language processing, classification, clustering, dimensionality reduction, and anomaly detection. Experience in applying AI to biomedical data analysis preferred. Ability to develop and deliver presentations. Ability to communicate effectively in writing and verbally. Ability to identify issues, present problems, and implement solutions. Capability of communicating technical concepts clearly, concisely, and understandably to non-technical colleagues. Good leadership, organizational, and time management skills, with the ability to multi-task. Strong interpersonal communication and presentation skills. Education or Equivalent Experience: A Master's degree in information and data science, operations research, mathematics, statistics, computer science, or another related field, with a focus in artificial intelligence preferred, and at least 7+ years of relevant professional experience; or a Bachelor's degree with 10+ years of relevant professional experience. Benefits: Why SDC? SDC is a team of diversified professionals who deliver exceptional biometric services, consulting, and technology solutions to pharmaceutical, biologic, and medical device/diagnostic companies. Since 2005 our purpose has been to partner with sponsors to provide high-quality, experienced team members to develop great medicines that save lives and cure diseases in the most efficient manner possible. Our global team operates as a value partner to our clients by fulfilling their needs as our own and delivering exceptional results.
We are a specialty CRO in that we provide scalable service offerings, focused service-area specialists, efficient project timelines, optimal technology solutions, and proven success and experience. Our commitment to our clients is the same commitment we make to our employees. By offering strong benefits including competitive pay, generous time off, attainable career advancement, and positive work/life balance, we are able to attract some of the most talented people in the industry. We are committed to developing our employees: we recognize achievements, provide growth opportunities and career advancement, and offer a flexible work schedule, an engaging work culture, and employee benefits. We are passionate about our company culture; our recognition program is directly tied to our core values of Energy, Integrity, Engagement, Innovation, Ownership, and Commitment. We strive to provide a place of belonging for our employees, with fun and engaging activities from SDC's culture club. We are constantly growing and innovating to support our client and employee needs. Global in nature, we bring diverse perspectives enabling our growth in this ever-evolving industry. With a proven track record, SDC has been successfully executing client clinical programs since 2005. Take a look at how you can join our team!

Posted 6 days ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary Position Summary CORE BUSINESS OPERATIONS The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously. ROLE Level: Consultant As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. As required, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. Python developers can take on various job roles, such as back-end web developer, data scientist, automation engineer, and machine learning engineer. The work you will do includes: Work on various software projects using Python as the primary programming language. Get involved in developing desktop applications, command-line tools, automation scripts, or backend services. Responsibilities include writing clean, efficient, and maintainable code, collaborating with other team members, and participating in the software development lifecycle. Build dynamic websites and web applications using Python along with frameworks such as Django, Flask, or FastAPI.
Work on back-end aspects of web development, implementing features, optimizing performance, and ensuring the security of web applications. Develop software solutions using industry-standard delivery methodologies like Agile and Waterfall across different architectural patterns. Write clean, efficient, and well-documented code maintaining industry and client standards, ensuring code quality and code coverage adherence, as well as debugging and resolving any issues/defects. Participate in delivery processes such as Agile development, actively contributing to sprint planning, daily stand-ups, and retrospectives. Resolve issues or incidents reported by end users and escalate any quality issues or risks to team leads/scrum masters/project leaders. Develop expertise in the end-to-end construction cycle, from design (low level and high level), coding, unit testing, deployment, and defect fixing, along with coordinating with multiple stakeholders. Qualifications Skills / Project Experience: Must Have: 3 - 6 years of hands-on experience in writing scalable code in the Python programming language, with knowledge of at least one Python web framework like Django, Flask, FastAPI, etc. Experience in Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch. Experience in database technologies such as SQL and NoSQL with ORM implementations. Experience in event-driven programming in Python. Proficient understanding of code versioning tools such as Git, SVN, Bitbucket, etc.
Experience in any one of the cloud computing platforms like AWS, Azure, or GCP. Experience in Agile/SAFe Agile project development methodologies. Good interpersonal and communication skills. Flexibility to adapt and apply innovation to varied business domains and apply technical solutioning and learnings to use cases across business domains and industries. Knowledge and experience working with Microsoft Office tools. Good to Have: Proficiency in Python and machine learning libraries, experience with deep learning frameworks (e.g., TensorFlow, PyTorch), knowledge of software engineering principles, understanding of cloud computing platforms and deployment pipelines, familiarity with DevOps practices. Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university. Prior Experience: 3 – 6 years of experience working with Python. Location: The team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously.
Learn more about our Technology Consulting practice on www.deloitte.com For information on CBO visit - https://www.youtube.com/watch?v=L1cGlScLuX0 For information on life of an Analyst at CBO visit - https://www.youtube.com/watch?v=CMe0DkmMQHI Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits.
Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300298

Posted 6 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Description Syngenta is one of the world’s leading agriculture innovation companies (part of the Syngenta Group), dedicated to improving global food security by enabling millions of farmers to make better use of available resources. Through world-class science and innovative crop solutions, our 60,000 people in over 100 countries are working to transform how crops are grown. We are committed to rescuing land from degradation, enhancing biodiversity, and revitalizing rural communities. A diverse workforce and an inclusive workplace environment are enablers of our ambition to be the most collaborative and trusted team in agriculture. Our employees reflect the diversity of our customers, the markets where we operate, and the communities which we serve. No matter what your position, you will have a vital role in safely feeding the world and taking care of our planet. To learn more visit: www.syngenta.com Job Description Position Overview: We are seeking an experienced and technically proficient Senior Software Engineer to join our dynamic Digital AgTech team in Pune. As a Senior Software Engineer, you will lead the development of cutting-edge digital agriculture solutions, architect data pipelines using Databricks, and drive innovation in our cloud-native products that shape the future of global agriculture.
Key Responsibilities Design and implement scalable data processing pipelines using Databricks Develop and maintain ETL workflows using Databricks Delta Lake Design and implement ML model serving pipelines for production environments Architect and deploy cloud-native solutions leveraging AWS services Provide technical leadership and mentoring to junior team members Drive best practices in code quality, testing, and deployment automation Collaborate with data scientists to optimize model deployment workflows Contribute to technical decision-making and architecture discussions Qualifications Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field 5-8 years of experience in software development with a focus on Python Minimum 1 year of hands-on experience with the Databricks platform Strong expertise in: Databricks, Delta Lake, and Apache Spark Python and PySpark AWS services (ECS, Lambda, S3, DynamoDB, SQS, etc.) Apache Kafka and message queuing systems ML model serving and deployment pipelines Container technologies (Docker, Kubernetes) Experience with CI/CD pipelines and DevOps practices Strong system design and problem-solving skills Excellent communication and leadership abilities Preferred Skills Experience with microservices architecture and RESTful APIs Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn) Knowledge of ML model optimization and scaling Experience with real-time prediction serving systems Experience with both Kafka and SQS in production environments Knowledge of the AgTech domain or a similar industry Experience with infrastructure as code (Terraform, CloudFormation) Additional Information Note: Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status.
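As an illustrative aside (not part of the posting), the staged ETL pipelines this role describes can be sketched in plain Python as a chain of transform functions; a real Databricks pipeline would express each stage as a Spark/Delta Lake transform, and the data here is invented for illustration.

```python
# Sketch of a staged data pipeline: parse raw rows, clean bad records, aggregate.
# Each stage is a plain function; run_pipeline threads data through them in order.

def parse(rows):
    """Turn raw (crop, yield-string) tuples into typed records."""
    return [{"crop": crop, "yield": float(y)} for crop, y in rows]

def clean(records):
    """Drop records with non-positive yields (e.g. sensor errors)."""
    return [r for r in records if r["yield"] > 0]

def aggregate(records):
    """Sum yields per crop."""
    totals = {}
    for r in records:
        totals[r["crop"]] = totals.get(r["crop"], 0.0) + r["yield"]
    return totals

def run_pipeline(data, stages):
    for stage in stages:
        data = stage(data)
    return data

result = run_pipeline(
    [("wheat", "2.5"), ("corn", "3.0"), ("wheat", "-1.0")],
    [parse, clean, aggregate],
)
```

The same parse/clean/aggregate structure maps directly onto Spark DataFrame transformations when the data no longer fits on one machine.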
Follow us on: Twitter & LinkedIn https://twitter.com/SyngentaAPAC https://www.linkedin.com/company/syngenta/ India page https://www.linkedin.com/company/70489427/admin/

Posted 6 days ago

Apply

1.0 - 6.0 years

16 - 20 Lacs

Hyderabad

Work from Office

AI/Machine Learning Engineer Skillsoft is the global leader in eLearning. Trusted by the world's leading organizations, including 65% of the Fortune 500. Our 100,000+ courses, videos and books are accessed over 100 million times every month, across more than 100 countries. At Skillsoft, we believe knowledge is the fuel for innovation and innovation is the fuel for business growth. Join us in our quest to democratize learning and help individuals unleash their edge. This role is for a junior to mid-level engineer. The successful candidate will work closely with more senior engineers who will mentor and guide them. You’ll dive into the heart of innovation, crafting intelligent systems that empower millions worldwide. From designing generative AI solutions to pioneering agentic workflows, you’ll collaborate with internal and external teams to transform knowledge into a catalyst for growth—unleashing your edge while helping others do the same. Responsibilities: Hands-on AI/ML software engineer Work with product owners and curators to understand requirements and guide new features Collaborate to identify new feature impacts on existing services and teams Research, prototype, and select appropriate COTS and in-house AI/ML technology and design Collaborate with team to design, develop, and maintain supporting systems Document designs and implementation to ensure consistency and alignment with standards Create documentation including system and sequence diagrams Create appropriate data pipelines for AI/ML Analyze, curate, cleanse, and split data Utilize and apply generative AI for products and for daily productivity Periodically explore new technologies and design patterns with proof-of-concept Participate in developing best practices and improving operational processes Present research and work to socialize and share knowledge across the organization Environment, Tools & Technologies: Agile/Scrum Operating Systems – Mac, Linux Python, JavaScript, Node.js PyTorch, 
TensorFlow, Keras, LLMs (OpenAI GPT-X, Claude, embedding models), vector indexing/databases, LangChain, AutoGen, RAG, agents; APIs – GraphQL, REST; Docker, Kubernetes; Amazon Web Services (AWS), MS Azure OpenAI; SQL (Postgres RDS/Aurora), NoSQL (Cassandra, Elasticsearch); Messaging – Kafka, RabbitMQ, SQS; Monitoring – New Relic, Splunk; GitHub, IDE (your choice). Skills & Qualifications: 2 years of relevant work experience, including experience with generative AI and fine-tuning models. Ability to design and document APIs, data models, and service interactions. Familiarity with, and ability to explain: testing strategies (unit, integration, system); system and API security techniques; data privacy concerns; microservices architecture; vertical vs. horizontal scaling; generative AI, NLP, DNNs, LLMs, auto-encoders, etc. Attributes for Success: Proactive, independent, adaptable. A collaborative team player. Customer-service minded with an ownership mindset. Excellent analytic and communication skills. Ability and desire to coach and mentor new and more junior developers. An innovative, problem-solving mindset. Passionate, curious, open to new ideas, with the ability to research and learn new technologies.
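As an illustrative aside (not part of the posting), the retrieval step of the RAG systems this listing mentions can be sketched as cosine-similarity ranking over embedding vectors; the vectors below are hand-made stand-ins, whereas a real system would get them from an embedding model and a vector database.

```python
# Toy RAG retrieval step: rank documents by cosine similarity to a query embedding.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical document embeddings (3-dimensional for readability).
docs = {
    "course_catalog": [0.9, 0.1, 0.0],
    "hr_policy": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.05]  # embedding of e.g. "which courses cover Python?"

# Pick the document whose embedding points most nearly the same way as the query.
best = max(docs, key=lambda name: cosine(query, docs[name]))
```

In a full RAG pipeline the retrieved document text would then be placed into the LLM prompt as context before generation.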

Posted 6 days ago

Apply

7.5 years

0 Lacs

India

Remote

Who We Are

At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See Yourself at Twilio

Join the team as Twilio’s next Staff Machine Learning Engineer.

About The Job

This position exists to scope, design, and deploy machine learning systems into the real world. The individual will closely partner with Product & Engineering teams to execute the roadmap for Twilio’s AI/ML products and services. You will understand customers’ needs, build data products that work at a global scale, and own end-to-end execution of large-scale ML solutions. To thrive in this role, you must have a deep background in ML engineering and a consistent track record of solving data and machine-learning problems at scale. You are a self-starter, embody a growth attitude, and collaborate effectively across the organization.

Responsibilities

In this role, you’ll:
• Build and maintain scalable machine learning solutions in production
• Train and validate both deep learning-based and statistical models, considering use case, complexity, performance, and robustness
• Demonstrate end-to-end understanding of applications and develop a deep understanding of the “why” behind our models and systems
• Partner with product managers, tech leads, and stakeholders to analyze business problems, clarify requirements, and define the scope of the systems needed
• Work closely with data platform teams to build robust, scalable batch and real-time data pipelines
• Collaborate with software engineers to build tools that enhance productivity and to ship and maintain ML models
• Drive high engineering standards on the team through mentoring and knowledge sharing
• Uphold engineering best practices around code reviews, automated testing, and monitoring

Qualifications

Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While having “desired” qualifications makes for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required:
• 7.5+ years of applied ML experience with proficiency in Python
• Strong background in the foundations of machine learning and the building blocks of modern deep learning
• Track record of building, shipping, and maintaining machine learning models in production in an ambiguous and fast-paced environment
• Track record of designing and architecting large-scale experiments and analyses to inform the product roadmap
• A clear understanding of frameworks like PyTorch, TensorFlow, or Keras, and why and how these frameworks do what they do
• Familiarity with MLOps concepts related to testing and maintaining models in production, such as testing, retraining, and monitoring
• Demonstrated ability to ramp up, understand, and operate effectively in new application/business domains
• Experience exploring modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.), and demonstrated experience designing and coding with big-data components such as DynamoDB or similar
• Experience working in an agile team environment with changing priorities
• Experience working on AWS

Desired:
• Experience with Large Language Models

Location

This role will be remote and based in India.

Travel

We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

What We Offer

Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values — something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.

Twilio is proud to be an equal opportunity employer.
We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
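One concrete form of the model monitoring this posting's qualifications mention is drift detection on input features or model scores. Below is a small, dependency-free sketch of the Population Stability Index (PSI), a common drift signal in MLOps; the sample data, bin count, and the 0.25 "investigate" threshold are illustrative conventions, not anything Twilio specifies:

```python
import math

def psi(expected, actual, bins=4):
    # Population Stability Index between a training-time (expected) and a
    # live (actual) sample of a numeric feature or score.
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # bin index via edge comparisons
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]               # scores seen at training time
live_same = [0.1 * i for i in range(100)]           # same distribution
live_shift = [5.0 + 0.05 * i for i in range(100)]   # shifted distribution

print(round(psi(train, live_same), 4))   # 0.0: no drift
print(psi(train, live_shift) > 0.25)     # True: crosses a common alert threshold
```

In a production pipeline, a value like this would feed an alert that triggers the retraining workflow the listing describes.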

Posted 6 days ago

Apply

2.0 - 5.0 years

0 Lacs

India

Remote

Who We Are

At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See Yourself at Twilio

Join the team as our next Machine Learning Engineer.

About The Job

This position exists to scope, design, and deploy machine learning systems into the real world. The individual will closely partner with Product & Engineering teams to execute the roadmap for Twilio’s AI/ML products and services. You will understand customers' needs, build data products that work at a global scale, and own end-to-end execution of large-scale ML solutions. To thrive in this role, you must have a deep background in ML engineering and a consistent track record of solving data and machine-learning problems at scale. You are a self-starter, embody a growth attitude, and collaborate effectively across organizations.

Responsibilities

In this role, you’ll:
• Build algorithms based on statistical modeling procedures and maintain scalable machine learning solutions in production
• Transform data science prototypes by applying appropriate ML algorithms and tools
• Work closely with ML engineers to build tools that enhance their productivity and to ship and maintain ML models
• Conduct exploratory data analysis (EDA) on large-scale datasets to identify patterns and extract features that directly support the problem statement
• Manage the infrastructure and data pipelines needed to bring code to production
• Demonstrate end-to-end understanding of the applications (including, but not limited to, the machine learning algorithms) being created
• Partner with product managers and architects to analyze business problems, clarify requirements, and define the scope of the systems needed
• Use cloud platforms such as AWS to handle larger-scale data
• Support operational leaders by developing code to automate manual processes
• Drive high engineering standards on the team through code review, automated testing, and mentoring

Qualifications

Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While having “desired” qualifications makes for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required:
• 2 - 5 years of applied ML experience in statistical and mathematical modeling, such as supervised and unsupervised machine learning and deep learning
• Strong proficiency in Python to effectively analyze data and solve complex technical challenges
• Track record of building, shipping, and maintaining machine learning systems in a highly ambiguous and fast-paced environment
• A clear understanding of frameworks like PyTorch, TensorFlow, or Keras, and why and how these frameworks do what they do
• Familiarity with concepts related to testing and maintaining models in production, such as A/B testing, retraining, and monitoring model performance
• Experience exploring modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.), and demonstrated experience designing and coding with big-data components such as DynamoDB or similar

Desired:
• Experience with Large Language Models

Location

This role will be remote and based in India.

Travel

We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

What We Offer

Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values — something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.

Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
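This posting's qualifications mention A/B testing for models in production. As a minimal sketch, here is the two-proportion z-test that commonly underlies such comparisons; the conversion counts are hypothetical, and a real rollout would also fix sample size and significance level up front:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # z-statistic comparing conversion rates of variants A and B,
    # the standard test behind many model A/B rollouts.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)               # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error under H0
    return (p_b - p_a) / se

# Hypothetical rollout: new model (B) converts 570/5000 vs. old model (A) 500/5000.
z = two_proportion_z(500, 5000, 570, 5000)
print(round(z, 2))  # 2.26, above the 1.96 cutoff for significance at the 5% level
```

A significant positive z would support promoting variant B; otherwise the old model stays in place while more data accumulates.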

Posted 6 days ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Hiring for AI Developer
Position: Full time
Location: Pan India
Interested candidates, share your CV to mansoor@burgeonits.com

Job Description: We are seeking an AI Developer experienced in designing, building, and deploying AI solutions using Microsoft Azure. The ideal candidate should be comfortable working with Azure AI services, MLOps tools, and Microsoft's machine learning ecosystem.

Responsibilities:
• Design and develop AI/ML models using Azure Machine Learning
• Leverage Azure Cognitive Services (Vision, Speech, Language, etc.) to deliver intelligent features
• Develop and automate ML pipelines using Azure ML, Azure Data Factory, and Azure DevOps
• Work with Azure Databricks for data processing and model training
• Collaborate with business and tech stakeholders to deliver AI-powered solutions

Required Skills:
• Strong knowledge of Python and the Azure ML SDK
• Hands-on experience with Azure AI tools: Azure Machine Learning, Cognitive Services, Bot Framework
• Experience with MLOps and CI/CD in Azure (Pipelines, MLOps)
• Proficiency in working with Azure Blob, Data Lake, and SQL Database
• Strong understanding of responsible AI principles and governance

Kindly fill in the details below and share your updated CV to mansoor@burgeonits.com:
• Name as per Aadhaar card
• Mobile no.
• Alternate no.
• Email id
• Alternate email (because most emails show as duplicates in the portal)
• Date of birth
• PAN card no. (for client upload; mandatory*)
• Total experience
• Relevant experience
• Current company
• If on any payroll (name)
• Notice period (if serving NP, mention last working day and year)
• CCTC
• ECTC
• Any offers (Yes/No); if yes, how much and the joining date
• Current location and preferred location
• Happy to relocate (Yes/No)
• Available interview time slots

Posted 6 days ago

Apply

15.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Position: Head of Engineering/Chief Technology Officer (CTO) Location : Navi Mumbai (Onsite) Engagement Type : Equity-Based (with minor monthly payouts to cover essentials) Experience Required : 8–15 Years Education : B.Tech from Tier-1 Institution preferred Start Date : Immediate Who We Are LinkCxO is a premium, AI-powered platform exclusively for senior and C-suite professionals. We bring together verified leadership jobs, curated business insights, and executive-only events—delivered in a single, trusted, and intelligent ecosystem. Why LinkCxO? 🌐 30,000+ Verified Senior Members 💼 10,000+ CxO-Level Jobs 📅 Executive-Only Events (Conferences, Webinars, Roundtables) 🧠 AI-Aggregated Business Insights from trusted sources 🤝 An Exclusive Platform for Collaboration, Hiring & Engagement What We’re Solving CxOs are overwhelmed with fragmented job listings, irrelevant events, and scattered insights—none of which are tailored to their leadership needs. Our Solution An AI-led, deeply personalized executive platform that integrates insights, opportunities, and connections—built by and for leaders. Role Overview We are looking for a Head of Engineering / Chief Technology Officer (CTO) —a rare blend of backend architect , hands-on developer , and Generative AI expert . You’ll own the tech roadmap, lead the development of a scalable platform infrastructure, and build cutting-edge AI products including LLMs, RAG pipelines, and agent-based systems that will power personalization and discovery at scale. You won’t just be building tech—you’ll be shaping the future of executive intelligence and engagement. 
Key Responsibilities 🏗️ Architecture & Platform Engineering Lead the design and architecture of a scalable, secure, and modular platform Oversee the development of backend systems using Golang, Echo, MongoDB, MySQL Implement robust microservices, APIs, and containerized infra using Docker, Kubernetes, Krakend 🧠 Generative AI & ML Integration Build, fine-tune, and deploy LLMs , Diffusion Models , RAG pipelines , and AI agents Leverage and integrate APIs from OpenAI, Hugging Face, LangChain, Anthropic , etc. Collaborate with data science and product teams to implement AI-powered recommendations, insights, and user flows Maintain performance, latency, and cost-effectiveness of production-grade AI models Ensure ethical, explainable AI use and governance across all systems ⚙️ Full-Stack Leadership Drive hands-on development, infrastructure scaling, and release cycles Manage auth/messaging systems with Keycloak and NATs Set up observability tools like Grafana, Prometheus, Loki for system health & performance 🚀 Strategic Ownership Translate business goals into a technical vision and roadmap Collaborate closely with Founders, Product, and Marketing to align GTM and tech execution Build and mentor a high-performing in-house and partner tech team Tech Stack Snapshot Backend : Golang, Echo Framework Messaging & Auth : NATs, Keycloak Databases : MongoDB, MySQL AI/ML : Python, PyTorch, TensorFlow, Hugging Face, LangChain, OpenAI APIs DevOps : Docker, Kubernetes, Krakend Monitoring : Grafana, Prometheus, Loki Who You Are ✅ 8–15 years of experience in backend development, platform architecture, and team leadership ✅ Proven experience working with LLMs , Generative AI , or MLOps pipelines ✅ Hands-on builder with a deep understanding of distributed systems & microservices ✅ Exposure to or interest in early-stage platforms/startups with high growth potential ✅ Comfortable in high-ownership, equity-first environments ✅ Passionate about AI innovation and building a lasting product 
legacy.

Leadership You’ll Work With
👔 Rajesh Padmanabhan – Chief Mentor: Ex-CHRO at Capgemini, Vedanta | UN Awardee for D&I
💡 Gyanesh Kumar – Founder & CEO: Ex-CBO at Shine.com | ISB & IIM-A alum | 20+ years in tech & leadership
🧠 Hema Gupta – Co-founder, CxO Hiring & Engagement: 16+ years in executive hiring, automotive & industrial focus
📊 Sumit Jha – Advisor, Chief Analytics Officer: IIT-Kharagpur, IIM Shillong | AI/ML expert across US, EU, India
🎯 Amitabh Kumar – Strategic Advisor & Executive Coach: ICF-Certified | 800+ hours of coaching | Media-recognized mentor
👥 You – HOE | CTO | Architect of LinkCxO's Future

What We Offer
📈 Co-Founder Equity in a mission-led, venture-scale startup
🧠 Tech Autonomy to own, build, and evolve the entire platform
💼 Minor Payouts to support basic operational needs
🚀 A Ground-Zero Opportunity to shape a category-defining product in AI & leadership

Posted 6 days ago

Apply

0.0 - 1.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Job Title: Embedded Voice & ML Applications Engineer

We are seeking a passionate and skilled Embedded Systems Engineer with expertise in voice and machine learning (ML) applications. The ideal candidate will have experience in embedded software development, real-time systems, and ML model deployment on resource-constrained devices.

Key Responsibilities:
• Develop and optimize embedded software using C programming and an RTOS (e.g., FreeRTOS)
• Perform board bring-up and hardware-software integration for embedded platforms
• Implement and optimize DSP/ISP algorithms with a focus on low power consumption
• Port and optimize ML models using frameworks like TensorFlow Lite Micro or PyTorch for real-time performance
• Work on debugging and optimizing multicore applications and utilize Neural Processing Units (NPUs)
• Collaborate on ML model conversion and tuning to fit the memory and real-time constraints of embedded devices
• Understand and write linker scripts (GCC, IAR, ARM) for embedded systems
• Contribute to the development of voice-based ML applications, including keyword spotting, speech recognition, and audio classification

Required Skills:
• Strong proficiency in C programming for embedded systems
• Hands-on experience with FreeRTOS or a similar RTOS
• Solid understanding of DSP, ISP, and embedded low-power optimization techniques
• Experience with ML frameworks such as TensorFlow Lite Micro or PyTorch for embedded systems
• Familiarity with model quantization, pruning, and inference optimization
• Experience with multicore processors and NPUs
• Knowledge of linker scripts and memory layout for embedded toolchains (GCC/IAR/ARM)
• Strong debugging skills and use of tools like JTAG, logic analyzers, and oscilloscopes

Preferred Qualifications:
• Experience in voice/audio processing
• Knowledge of Embedded Linux or bare-metal programming
• Exposure to Edge AI and TinyML concepts

(ref:hirist.tech)
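The model quantization this posting asks about trades a little precision for large memory and latency savings on microcontrollers. Here is a dependency-free sketch of symmetric int8 post-training quantization, the basic idea behind what toolchains like TensorFlow Lite Micro automate; the weights are toy values, and real converters also handle per-channel scales and activations:

```python
def quantize_int8(weights):
    # Symmetric post-training quantization: map floats to int8 codes
    # using a single scale derived from the largest magnitude.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats; storage drops from 32 to 8 bits per weight.
    return [x * scale for x in q]

weights = [0.9, -0.5, 0.1, -0.02]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

print(q)  # [127, -71, 14, -3]
# Round-trip error is bounded by one quantization step:
print(max(abs(w - r) for w, r in zip(weights, restored)) < scale)  # True
```

This is why quantized keyword-spotting models fit in the tens-of-kilobytes budgets typical of TinyML targets.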

Posted 6 days ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

We are looking for an experienced Embedded Voice & ML Applications Engineer with a strong background in embedded systems and machine learning for edge devices. The role involves developing and optimizing real-time applications for voice and ML use cases on low-power, resource-constrained platforms.

Key Responsibilities:
• Develop embedded software in C for voice and ML applications
• Work with an RTOS (FreeRTOS) for real-time processing
• Perform board bring-up and low-level hardware interfacing
• Optimize systems for low power and performance efficiency
• Implement and port models using TensorFlow Lite Micro or PyTorch
• Debug multicore applications and integrate with NPUs
• Work with linker scripts (GCC, IAR, ARM) and embedded toolchains
• Handle DSP/ISP processing and real-time data flows

Required Skills:
• 5+ years of experience in embedded development
• Strong in C programming, RTOS, and hardware-level debugging
• Experience with ML frameworks and real-time model deployment
• Good understanding of multicore systems, NPUs, and signal processing
• Familiarity with embedded toolchains and linker scripts

(ref:hirist.tech)

Posted 6 days ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Hyderabad

Work from Office

The Role

The ideal candidate for this role will be an innovative self-starter. You will be an AI expert with experience in making architectural tradeoffs to transform AI performance for a variety of use cases. You will collaborate with internal and external development engineers (architecture, hardware, validation, software services). You will contribute to development and support device characterization and benchmarking.

Requirements:
• Proficiency in large models and deep neural networks, with hands-on experience working with them
• Expertise in LLMs, with working knowledge of large language models
• Extensive experience in system platform architecture
• Development experience, preferably with memory, storage, or any embedded system
• In-depth knowledge and extensive experience with standardizations, technical papers, and patents
• Extensive experience with C/C++ and Python programming
• Develop and fine-tune LLMs (GPT, Llama, Mistral, Falcon, Claude, etc.) for domain-specific applications
• Implement RAG pipelines using LlamaIndex, LangGraph, and vector databases (FAISS, Pinecone, Weaviate, ChromaDB) to enhance response accuracy
• Build AI-powered chatbots and autonomous agents using LangGraph, CrewAI, LlamaIndex, and OpenAI APIs
• Optimize and deploy generative AI models for real-time inference using cloud platforms (AWS, GCP, Azure) and MLOps tools (Docker, Kubernetes, MLflow)
• Fine-tune models using LoRA, QLoRA, PEFT, and RLHF to improve efficiency and personalization
• Develop AI-driven workflows for structured reasoning and decision-making using CrewAI and LangGraph
• Integrate multi-modal AI models (text, image, speech) into enterprise solutions
• Implement memory and retrieval strategies for LLM-based systems using vector search and caching techniques
• Ensure AI models follow ethical AI guidelines, bias mitigation, and security best practices

Qualifications:
• B.Tech in Computer Science or Electrical Engineering
• 8 to 10 years of experience in the relevant domain
• Strong analytical and abstract thinking ability, as well as technical communication skills
• Able to work independently and perform in a fast-paced environment
• Ability to troubleshoot and debug complex issues
• Working knowledge of device drivers is desirable
• Prior experience in working with …

Skills:
• Languages & Frameworks: proficiency in Python, PyTorch, TensorFlow, JAX

(ref:hirist.tech)
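Among the fine-tuning methods this posting lists (LoRA, QLoRA, PEFT), LoRA's core idea is compact enough to sketch: freeze the pretrained weight matrix and learn only a low-rank additive update. The matrices below are toy values, not a real model, and libraries like Hugging Face PEFT handle this at scale:

```python
def matmul(A, B):
    # Plain list-of-lists matrix multiply.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha=1.0):
    # LoRA: keep the pretrained weight W frozen and learn a low-rank
    # update A @ B, so the effective weight is W + alpha * (A @ B).
    delta = matmul(A, B)
    W_eff = [[w + alpha * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
    return matmul(x, W_eff)

# Toy shapes: W is 2x2; adapters A (2x1) and B (1x2) form a rank-1 update.
# For a d x d weight and rank r, adapters cost 2*d*r numbers instead of d*d,
# so the savings grow as r << d.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0], [0.0]]
B = [[0.0, 0.5]]
x = [[2.0, 3.0]]

print(lora_forward(x, W, A, B))  # [[2.0, 4.0]]
```

Only A and B receive gradients during fine-tuning, which is what makes LoRA cheap to train and to swap between tasks.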

Posted 6 days ago

Apply

5.0 years

0 Lacs

India

Remote

We're seeking a Mid-Level Machine Learning Engineer to join our growing Data Science & Engineering team. In this role, you will design, develop, and deploy ML models that power our cutting-edge technologies like voice ordering, prediction algorithms, and customer-facing analytics. You'll collaborate closely with data engineers, backend engineers, and product managers to take models from prototyping through to production, continuously improving accuracy, scalability, and maintainability.

Essential Job Functions
• Model Development: Design and build next-generation ML models using advanced tools like PyTorch, Gemini, and Amazon SageMaker, primarily on Google Cloud or AWS platforms
• Feature Engineering: Build robust feature pipelines; extract, clean, and transform large-scale transactional and behavioral data; engineer features like time-based attributes, aggregated order metrics, and categorical encodings (LabelEncoder, frequency encoding)
• Experimentation & Evaluation: Define metrics, run A/B tests, conduct cross-validation, and analyze model performance to guide iterative improvements.
• Model Training: Train and tune regression models (XGBoost, LightGBM, scikit-learn, TensorFlow/Keras) to minimize MAE/RMSE and maximize R²; own the entire modeling lifecycle end-to-end, including feature creation, model development, testing, experimentation, monitoring, explainability, and model maintenance
• Monitoring & Maintenance: Implement logging, monitoring, and alerting for model drift and data-quality issues; schedule retraining workflows
• Collaboration & Mentorship: Collaborate closely with data science, engineering, and product teams to define, explore, and implement solutions to open-ended problems that advance the capabilities and applications of Checkmate; mentor junior engineers on best practices in ML engineering
• Documentation & Communication: Produce clear documentation of model architecture, data schemas, and operational procedures; present findings to technical and non-technical stakeholders

Requirements
• Academics: Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field
• Experience: 5+ years of industry experience (or 1+ year post-PhD) building and deploying advanced machine learning models that drive business impact; proven experience shipping production-grade ML models and optimization systems, including expertise in experimentation and evaluation techniques; hands-on experience building and maintaining scalable backend systems and ML inference pipelines for real-time or batch prediction
• Programming & Tools: Proficient in Python and libraries such as pandas, NumPy, and scikit-learn; familiarity with TensorFlow or PyTorch; hands-on with at least one cloud ML platform (AWS SageMaker, Google Vertex AI, or Azure ML)
• Data Engineering: Hands-on experience with SQL and NoSQL databases; comfortable working with Spark or similar distributed frameworks; strong foundation in statistics, probability, and ML algorithms like XGBoost/LightGBM; ability to interpret model outputs and optimize for business metrics
• Experience with categorical encoding strategies and feature selection; solid understanding of regression metrics (MAE, RMSE, R²) and hyperparameter tuning
• Cloud & DevOps: Proven skills deploying ML solutions in AWS, GCP, or Azure; knowledge of Docker, Kubernetes, and CI/CD pipelines
• Collaboration: Excellent communication skills; ability to translate complex technical concepts into clear, actionable insights
• Working Terms: Candidates must be flexible and work during US hours, at least until 6 p.m. ET, which is essential for this role, and must have their own system/work setup for remote work
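The regression metrics this role optimizes (MAE, RMSE, R²) can be computed directly from predictions. A small sketch with made-up values follows; production code would typically use scikit-learn's `mean_absolute_error`, `mean_squared_error`, and `r2_score` instead:

```python
import math

def regression_metrics(y_true, y_pred):
    # MAE: average absolute error; RMSE: root of mean squared error
    # (penalizes large misses more); R²: fraction of variance explained.
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return mae, rmse, r2

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.5, 5.0, 7.5, 9.0]
mae, rmse, r2 = regression_metrics(y_true, y_pred)
print(mae)   # 0.25
print(rmse)  # ~0.354
print(r2)    # 0.975
```

Minimizing MAE/RMSE and maximizing R² pull in the same direction here, but RMSE is the one to watch when occasional large errors are costly.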

Posted 6 days ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Job Title: AI Engineer - Azure AI

We are seeking a highly skilled and motivated AI Engineer to join our team. In this role, you will leverage your expertise in AI technologies and the Microsoft ecosystem to design, build, and deploy intelligent agents and automation solutions that enhance business processes and deliver value to our clients. Candidates must have extensive experience in the Microsoft environment. You will collaborate with cross-functional teams to create innovative solutions using Microsoft tools and platforms.

Responsibilities:

Agent Development:
- Design, implement, and optimize AI agents using the Microsoft Azure framework and related technologies.
- Develop custom AI solutions leveraging Power Automate, Azure OpenAI, and other Microsoft tools.

Solution Integration:
- Deploy AI solutions within client environments, ensuring scalability and seamless integration with existing systems.
- Work with stakeholders to identify automation opportunities and tailor solutions to business needs.

AI Algorithm and Model Implementation:
- Design and implement machine learning algorithms, focusing on natural language processing (NLP) and conversational AI.
- Perform data preprocessing, feature engineering, and model training to create high-performing solutions.

Collaboration and Support:
- Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, to deliver integrated solutions.
- Provide technical guidance and support to ensure the successful adoption and use of AI-driven tools.

Continuous Improvement:
- Stay updated on advancements in AI, machine learning, and Microsoft's AI technologies.
- Contribute to knowledge sharing by conducting training sessions and documenting best practices.

Preferred Skills:
- Strong knowledge of Azure OpenAI, Azure AI Search indexes, and open-source libraries such as LangChain and LlamaIndex.
- Proficiency in Python and its AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Familiarity with building and managing cloud-based solutions, preferably on Microsoft Azure.
- Understanding of conversational AI technologies and chatbot frameworks.
- Experience with data analysis tools and techniques to uncover insights and optimize models.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Proven experience in developing and deploying AI/ML models in real-world applications.
- Strong programming skills, especially in Python, and familiarity with version control systems like Git.
- Extensive experience in the Microsoft environment and related technologies.

Posted 6 days ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Hyderabad, Pune

Work from Office

Position Title: AI/ML Developer - Python
No. of positions: 1
Location: Pune / Hyderabad

Responsibilities:
• Design machine learning systems
• Research and implement machine learning algorithms and tools
• Manage and direct research and development processes to meet the needs of our AI strategy
• Develop machine learning applications in alignment with project requirements and business goals
• Perform machine learning tests and statistical analysis in order to fine-tune the machine learning systems
• Select appropriate datasets and data representation methods
• Extend existing machine learning libraries and frameworks
• Train systems and retrain as necessary
• Work with the engineering and leadership teams on the functional design, process design, prototyping, testing, and training of AI/ML solutions
• Advise leaders on technology, strategy, and policy issues related to AI/ML
• Coordinate with development teams to determine application requirements
• Write scalable code using the Python programming language
• Test and debug applications
• Develop back-end components
• Integrate user-facing elements using server-side logic
• Assess and prioritize client feature requests
• Integrate data storage solutions
• Coordinate with front-end developers
• Reprogram existing databases to improve functionality
• Develop digital tools to monitor online traffic
• Work with data scientists and data engineers to create production-quality machine learning pipelines and solutions, with an emphasis on performance, scalability, reliability, and maintainability
• Build components and libraries that will improve existing solutions and the delivery of new ones

Required Skills and Experience:
• 3+ years of experience applying AI to practical uses
• Experience with deep learning, NLP, and TensorFlow
• Experience writing robust code in Python, Java, and/or R
• Experience in REST API development, NoSQL database design, and RDBMS design and optimization
• Knowledge of basic algorithms and object-oriented and functional design principles
• Knowledge of data structures, data modeling, and software architecture
• Knowledge of math, probability, statistics, and algorithms
• Knowledge of machine learning frameworks
• Knowledge of machine learning libraries such as scikit-learn
• Excellent communication skills
• Strong analytical and problem-solving skills

Preferred Qualifications:
• Bachelor's degree in a relevant technology field
• Experience with cloud environments

Posted 6 days ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Pune

Work from Office

AI/ML Engineers

AI/ML (Artificial Intelligence/Machine Learning) engineers with 3+ years of experience and knowledge of Python, Microsoft OpenAI, and GitHub Copilot, along with AWS (Amazon Web Services) cloud experience.

Responsibilities:
• Design, implement, and optimize machine learning algorithms using Python to solve complex business problems
• Build prototypes, demos, and Gen AI and AI/ML solutions
• Leverage Microsoft OpenAI and GitHub Copilot for enhanced code suggestions and collaboration
• Train, evaluate, and fine-tune machine learning models on large datasets using frameworks like TensorFlow or PyTorch
• Utilize OpenAI's models for natural language processing tasks and generate code snippets with Copilot to expedite development
• Perform data preprocessing tasks, including cleaning, normalization, and feature engineering, to prepare datasets for machine learning models
• Use Python libraries and tools to extract meaningful features from raw data
• Deploy and manage machine learning models on AWS cloud infrastructure, using AWS services like SageMaker for model training, deployment, and monitoring
• Collaborate with cross-functional teams, including data scientists, software engineers, and business stakeholders
• Document code, models, and processes to ensure transparency and facilitate knowledge transfer
• Implement CI/CD pipelines to automate the testing, deployment, and monitoring of machine learning models
• Use GitHub Copilot to enhance code quality and efficiency in the development lifecycle
• Monitor the performance of deployed models and optimize them for scalability, speed, and accuracy
• Implement best practices for model versioning and management
• Ensure that machine learning solutions adhere to security standards and compliance requirements, implementing secure coding practices and data protection measures
• Stay updated on the latest advancements in AI/ML, Python, OpenAI, and cloud technologies
• Experiment with new models, techniques, and tools to drive innovation within the team
• Troubleshoot and debug issues related to machine learning models, Python code, and cloud infrastructure
• Collaborate with support teams to address production issues promptly
• Provide guidance and training to team members on AI/ML best practices, Python coding standards, and effective use of OpenAI and Copilot
• Conduct workshops or knowledge-sharing sessions within the organization
• Optimize cloud resources to ensure cost-effectiveness in model training and deployment on AWS

Required Skills:
• Proficiency in Python, Gen AI, Microsoft OpenAI, and GitHub Copilot, along with AWS
• Strong knowledge of Python, TensorFlow, and PyTorch
• Strong knowledge of LLM-based tools such as Microsoft OpenAI and Copilot
• Knowledge of cloud platforms like Amazon Web Services
• Solid understanding of SQL and data manipulation techniques for data extraction, transformation, and loading
• Ability to understand business requirements and translate them into meaningful solutions
• Strong analytical and problem-solving skills
• Excellent communication and collaboration skills to work effectively with business users, data analysts, and other stakeholders

Educational Qualifications:
• Bachelor's degree in computer science, software engineering, mathematics, or a related field is often the minimum requirement
• Knowledge of data science is an added advantage
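The preprocessing step called out above (cleaning, normalization, feature engineering) can be sketched in plain Python. The record fields and the derived feature below are hypothetical, chosen only to make the three stages concrete.

```python
# Illustrative preprocessing sketch: drop incomplete records,
# min-max scale numeric columns, then derive a simple interaction
# feature. Field names are invented for this example.

def clean(rows):
    """Drop records containing any missing value."""
    return [r for r in rows if all(v is not None for v in r.values())]

def min_max(values):
    """Scale a list of numbers into [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]

raw = [
    {"age": 25, "income": 40_000},
    {"age": None, "income": 52_000},   # dropped by clean()
    {"age": 45, "income": 90_000},
]
rows = clean(raw)
ages = min_max([r["age"] for r in rows])
incomes = min_max([r["income"] for r in rows])
features = [{"age": a, "income": i, "age_x_income": a * i}
            for a, i in zip(ages, incomes)]
print(features[0])  # → {'age': 0.0, 'income': 0.0, 'age_x_income': 0.0}
```

In production, the fitted scaling parameters (min/max here) would be persisted with the model so the same transformation is applied at inference time.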

Posted 6 days ago

Apply

2.0 - 7.0 years

13 - 17 Lacs

Chennai

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field
• 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Job Title: MLOps Engineer - ML Platform
Hiring Title: Flexible based on candidate experience; Staff Engineer preferred

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML & Data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflow, Prometheus, and Grafana. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
• Architect, develop, and maintain the ML platform to support training and inference of ML models
• Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and on AWS Cloud
• Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform
• Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment
• Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform
• Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflow
• Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform
• Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform
• Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools
• Stay updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform

What are we looking for:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters
• Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads
• Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflow, Prometheus, and Grafana
• Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch)
• In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques
• Familiarity with containerization technologies such as Docker and orchestration tools
• Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD)
• Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS
• Experience with AWS logging and monitoring tools
• Strong problem-solving skills and the ability to troubleshoot complex technical issues
• Excellent communication and collaboration skills to work effectively within a cross-functional team

We would love to see:
• Experience with training and deploying models
• Knowledge of ML model optimization techniques and memory management on GPUs
• Familiarity with ML-specific data storage and retrieval systems
• Understanding of security and compliance requirements in ML infrastructure
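The automated train/evaluate/deploy CI/CD pipelines this role describes typically include a promotion gate: a newly trained model is rolled out only if it beats the current production model by some margin. The sketch below is a hypothetical, framework-free illustration of that gate; in practice each lambda would be an Argo Workflow or Jenkins step, and the threshold would come from configuration.

```python
# Hypothetical model-promotion gate, as a CI/CD pipeline might apply
# before deploying a retrained model. Thresholds are illustrative.

def should_promote(candidate_acc, production_acc, min_gain=0.01):
    """Promote only on a minimum accuracy improvement over production."""
    return candidate_acc >= production_acc + min_gain

def pipeline(train, evaluate, deploy, production_acc):
    """Run train -> evaluate -> (conditionally) deploy, in order."""
    model = train()
    acc = evaluate(model)
    if should_promote(acc, production_acc):
        deploy(model)
        return "promoted", acc
    return "rejected", acc

status, acc = pipeline(
    train=lambda: "model-v2",    # stand-in for a real training job
    evaluate=lambda m: 0.93,     # stand-in for a validation run
    deploy=lambda m: None,       # stand-in for a rollout step
    production_acc=0.90,
)
print(status, acc)  # → promoted 0.93
```

Encoding the gate as code (rather than a manual sign-off) is what makes retraining safely automatable.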

Posted 6 days ago

Apply

8.0 - 13.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Job Area: Engineering Group, Engineering Group > Machine Learning Engineering

General Summary / Job Overview: Qualcomm is a company of inventors that unlocked 5G, ushering in an age of rapid acceleration in connectivity and new possibilities that will transform industries, create jobs, and enrich lives. But this is just the beginning. It takes inventive minds with diverse skills, backgrounds, and cultures to transform 5G's potential into world-changing technologies and products. This is the Invention Age, and this is where you come in. We are investing in several supporting technologies including 4G, 5G, Edge Computing, and Deep Learning. The Qualcomm AI team is developing hardware and software for machine learning solutions spanning the data center, edge, infrastructure, and automotive markets and beyond. We are seeking ambitious, bright, and innovative engineers with experience in machine learning frameworks, compiler technology, vectorization and optimization, and machine learning toolchains. Job activities span the whole product life cycle, from early design to commercial deployment. The environment is fast-paced and requires daily cross-functional interaction, so good communication, planning, and execution skills are a must. We are looking to staff engineers at multiple levels in systems & software, integration, and test. Details of one of the roles we are looking to staff are listed below.

Responsibilities:
• Research, design, develop, enhance, and implement the different components of the machine learning framework and compilers, based on the performance and code-size needs of customer workloads and benchmarks
• Analyze software requirements, determine the feasibility of a design within the given constraints, consult with architecture and HW engineers, and implement the software solutions best suited for Qualcomm's SoCs
• Analyze and identify system-level integration issues; interface with the software development, integration, and test teams

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or related field
• 8+ years of Systems Engineering or related work experience

Preferred Qualifications:
• Working knowledge of machine learning frameworks such as PyTorch and TensorFlow
• Experience in model-level optimization using techniques like torch.compile
• LLVM or other industrial-strength compiler development experience is a plus
• Knowledge of the structure and function of compiler internals
• Hands-on experience writing SIMD and/or multi-threaded high-performance code is a plus
• Hands-on experience implementing DSP kernels is a plus
• Hands-on experience in C/C++ and Python development (5+ years)
• Hands-on experience with object-oriented design and TDD solutions such as GoogleTest (4+ years)
• Experience with source code and configuration management tools; git knowledge is required
• Willingness to work in a cohesive software development environment, with the ability to work on low-level implementation (code & test) and to interface with hardware and simulators
• Experience in neural network architectures and ML compiler workload synthesis is a plus
• Prior working experience with hardware accelerators and hardware/software co-design
• Experience using C++14/17 (advanced features)
• Experience at both the firmware (RTOS) and system (Linux) levels in SoCs
• Experience with software profiling and optimization techniques
• Passion and drive to develop leading-edge deep learning frameworks and algorithms for mobile and embedded platforms

Minimum Qualifications: Bachelor's degree in Computer Science, Engineering, Information Systems, or related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR Master's degree in Computer Science, Engineering, Information Systems, or related field and 3+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR PhD in Computer Science, Engineering, Information Systems, or related field and 2+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience.
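The ML compiler work described above (optimizing framework graphs for performance) can be illustrated with a toy rewrite pass: fusing a multiply followed by a dependent add into a single fused multiply-add node, a common pattern in ML compilers. The tiny list-based IR here is invented purely for illustration and does not correspond to any real compiler's representation.

```python
# Toy operator-fusion pass: rewrite a 'mul' op immediately consumed
# by an 'add' into one 'fma' (fused multiply-add) op. The IR is a
# list of tuples, with "prev" meaning "result of the previous op".

def fuse_muladd(ops):
    """Fuse [('mul', a, b), ('add', 'prev', c)] pairs into ('fma', a, b, c)."""
    out, i = [], 0
    while i < len(ops):
        if (i + 1 < len(ops)
                and ops[i][0] == "mul"
                and ops[i + 1][0] == "add"
                and ops[i + 1][1] == "prev"):
            _, a, b = ops[i]
            _, _, c = ops[i + 1]
            out.append(("fma", a, b, c))
            i += 2  # consumed two ops, emitted one
        else:
            out.append(ops[i])
            i += 1
    return out

# x*w + bias followed by ReLU, a typical dense-layer sequence
graph = [("mul", "x", "w"), ("add", "prev", "bias"), ("relu", "prev")]
print(fuse_muladd(graph))  # → [('fma', 'x', 'w', 'bias'), ('relu', 'prev')]
```

Real passes of this kind cut memory traffic by avoiding the intermediate result, which is exactly the performance and code-size concern the role names.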

Posted 6 days ago

Apply

1.0 - 5.0 years

12 - 16 Lacs

Chennai

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or related field

Job Title: MLOps Engineer - ML Platform
Hiring Title: Flexible based on candidate experience; Staff Engineer preferred

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML & Data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflow, Prometheus, and Grafana. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
• Architect, develop, and maintain the ML platform to support training and inference of ML models
• Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and on AWS Cloud
• Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform
• Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment
• Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform
• Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflow
• Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform
• Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform
• Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools
• Stay updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform

What are we looking for:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters
• Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads
• Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflow, Prometheus, and Grafana
• Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch)
• In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques
• Familiarity with containerization technologies such as Docker and orchestration tools
• Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD)
• Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS
• Experience with AWS logging and monitoring tools
• Strong problem-solving skills and the ability to troubleshoot complex technical issues
• Excellent communication and collaboration skills to work effectively within a cross-functional team

We would love to see:
• Experience with training and deploying models
• Knowledge of ML model optimization techniques and memory management on GPUs
• Familiarity with ML-specific data storage and retrieval systems
• Understanding of security and compliance requirements in ML infrastructure

Posted 6 days ago

Apply

4.0 - 9.0 years

12 - 17 Lacs

Chennai

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience
• 2+ years of work experience with a programming language such as C, C++, Java, Python, etc.

Job Title: MLOps Engineer - ML Platform
Hiring Title: Flexible based on candidate experience; Staff Engineer preferred

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML & Data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflow, Prometheus, and Grafana. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
• Architect, develop, and maintain the ML platform to support training and inference of ML models
• Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and on AWS Cloud
• Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform
• Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment
• Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform
• Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflow
• Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform
• Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform
• Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools
• Stay updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform

What are we looking for:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters
• Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads
• Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflow, Prometheus, and Grafana
• Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch)
• In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques
• Familiarity with containerization technologies such as Docker and orchestration tools
• Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD)
• Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS
• Experience with AWS logging and monitoring tools
• Strong problem-solving skills and the ability to troubleshoot complex technical issues
• Excellent communication and collaboration skills to work effectively within a cross-functional team

We would love to see:
• Experience with training and deploying models
• Knowledge of ML model optimization techniques and memory management on GPUs
• Familiarity with ML-specific data storage and retrieval systems
• Understanding of security and compliance requirements in ML infrastructure

Posted 6 days ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Bengaluru

Work from Office

Job Area: Engineering Group, Engineering Group > Machine Learning Engineering

General Summary / Job Overview: Qualcomm is a company of inventors that unlocked 5G, ushering in an age of rapid acceleration in connectivity and new possibilities that will transform industries, create jobs, and enrich lives. But this is just the beginning. It takes inventive minds with diverse skills, backgrounds, and cultures to transform 5G's potential into world-changing technologies and products. This is the Invention Age, and this is where you come in. We are investing in several supporting technologies including 4G, 5G, Edge Computing, and Deep Learning. The Qualcomm AI team is developing hardware and software for machine learning solutions spanning the data center, edge, infrastructure, and automotive markets and beyond. We are seeking ambitious, bright, and innovative engineers with experience in machine learning frameworks, compiler technology, vectorization and optimization, and machine learning toolchains. Job activities span the whole product life cycle, from early design to commercial deployment. The environment is fast-paced and requires daily cross-functional interaction, so good communication, planning, and execution skills are a must. We are looking to staff engineers at multiple levels in systems & software, integration, and test. Details of one of the roles we are looking to staff are listed below.

Responsibilities:
• Research, design, develop, enhance, and implement the different components of the machine learning framework and compilers, based on the performance and code-size needs of customer workloads and benchmarks
• Analyze software requirements, determine the feasibility of a design within the given constraints, consult with architecture and HW engineers, and implement the software solutions best suited for Qualcomm's SoCs
• Analyze and identify system-level integration issues; interface with the software development, integration, and test teams

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or related field
• 8+ years of Systems Engineering or related work experience

Preferred Qualifications:
• Working knowledge of machine learning frameworks such as PyTorch and TensorFlow
• Experience in model-level optimization using techniques like torch.compile
• LLVM or other industrial-strength compiler development experience is a plus
• Knowledge of the structure and function of compiler internals
• Hands-on experience writing SIMD and/or multi-threaded high-performance code is a plus
• Hands-on experience implementing DSP kernels is a plus
• Hands-on experience in C/C++ and Python development (5+ years)
• Hands-on experience with object-oriented design and TDD solutions such as GoogleTest (4+ years)
• Experience with source code and configuration management tools; git knowledge is required
• Willingness to work in a cohesive software development environment, with the ability to work on low-level implementation (code & test) and to interface with hardware and simulators
• Experience in neural network architectures and ML compiler workload synthesis is a plus
• Prior working experience with hardware accelerators and hardware/software co-design
• Experience using C++14/17 (advanced features)
• Experience at both the firmware (RTOS) and system (Linux) levels in SoCs
• Experience with software profiling and optimization techniques
• Passion and drive to develop leading-edge deep learning frameworks and algorithms for mobile and embedded platforms

Minimum Qualifications: Bachelor's degree in Computer Science, Engineering, Information Systems, or related field and 6+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR Master's degree in Computer Science, Engineering, Information Systems, or related field and 5+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR PhD in Computer Science, Engineering, Information Systems, or related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Title: Data Scientist
Primary Location: Gurgaon (Onsite)
Job Type: Contract
Secondary Locations: Chennai / Pune / Bangalore / Noida

Job Description:
• Bachelor's or Master's degree in Data Science, Computer Science, Artificial Intelligence, or a related field
• Proficiency in programming languages such as Python, SQL, and other relevant languages
• Experience with AI/ML frameworks like TensorFlow, PyTorch, or similar
• Knowledge of data mapping techniques: experience with AI-driven tools and methodologies for data mapping
• Strong code generation skills: experience in developing AI models for automating code generation
• Experience with data validation: implementing AI-based validation techniques
• Familiarity with code conversion: understanding AI algorithms for code conversion
• Experience with databases like SQL Server and MongoDB
• Collaboration skills: ability to work effectively with cross-functional teams
• Problem-solving skills: strong ability to identify issues and develop creative solutions
• Attention to detail: ensure accuracy and reliability of data conversions
• 5+ years of relevant experience in data science or a related field
• Willingness to fail fast and learn from mistakes in the fast-paced technology market

Preferred qualifications for consideration:
• Experience in the financial services industry and an understanding of compliance standards
• Certification in Data Science or AI/ML
• Experience with master data management, data wrangling, and ETL processes
• Familiarity with DevOps tools like Jira, Confluence, and Bitbucket
• Experience with data and AI/ML technologies such as NLP/NLU, Azure Cognitive Services, Azure Synapse Analytics, Azure Databricks, and Azure ML service
• Previous experience delivering AI solutions for complex data or conversions: seamless data schema conversion, AI-driven data validation for migration accuracy, intelligent code generation for data transformation scripts, historical data transformation and archiving, intelligent error detection and correction, AI-augmented data reconciliation

Thanks,
Aatmesh
aatmesh.singh@ampstek.com
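The data validation and reconciliation work named above reduces, in its simplest rule-based form, to comparing row counts and order-independent per-column checksums between a source table and its migrated target. The sketch below is a hypothetical illustration; table contents and column names are invented, and a real system would layer AI-driven anomaly detection on top of checks like these.

```python
# Minimal migration-reconciliation sketch: flag row-count drift and
# per-column value drift between source and target tables.

import hashlib

def column_checksum(rows, col):
    """Order-independent digest of one column's values."""
    digests = sorted(hashlib.sha256(str(r[col]).encode()).hexdigest()
                     for r in rows)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def reconcile(source, target, cols):
    """Return a list of human-readable discrepancies (empty = clean)."""
    issues = []
    if len(source) != len(target):
        issues.append(f"row count {len(source)} != {len(target)}")
    for col in cols:
        if column_checksum(source, col) != column_checksum(target, col):
            issues.append(f"column '{col}' mismatch")
    return issues

src = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 20.0}]
tgt = [{"id": 2, "amt": 20.0}, {"id": 1, "amt": 10.5}]  # amt drifted
print(reconcile(src, tgt, ["id", "amt"]))  # → ["column 'amt' mismatch"]
```

Because the checksum is order-independent, reordered but otherwise identical tables reconcile cleanly, which matters when source and target engines return rows in different orders.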

Posted 6 days ago

Apply

15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description: Ciklum is looking for a Presales Solutions Architect to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts, and product owners, we engineer technology that redefines industries and shapes the way people live.

About The Role: We are seeking a highly skilled and customer-focused Presales Solutions Architect with expertise across AI-driven technologies, cloud platforms, and modern application architecture. You will work closely with the sales, product, and engineering teams to design innovative, scalable, and value-driven solutions for our prospects and clients, combining technical acumen with strong business understanding.

Responsibilities:

Solution Design & Technical Leadership:
• Understand client business goals and translate them into robust, scalable, and secure technical architectures
• Design and present end-to-end solutions incorporating AI/ML, cloud services, and modern full-stack technologies
• Serve as a technical advisor across the sales cycle: discovery, evaluation, proposal, and closing

AI & Generative AI Expertise:
• Advise customers on the application of AI/ML, Generative AI, and AI-driven engineering practices (e.g., intelligent code generation, AI-assisted QA, AI Ops)
• Architect solutions using LLMs, vector databases, RAG pipelines, prompt engineering, and LLMOps practices
• Showcase capabilities through compelling PoCs, demos, and technical workshops

Cloud & Architecture Knowledge:
• Leverage your knowledge of cloud platforms (AWS, Azure, GCP) to design secure and scalable cloud-native architectures
• Apply principles of microservices, API-first design, containerization (Docker, Kubernetes), and CI/CD pipelines to meet modern software engineering standards
• Understand integration of front-end (React/Angular) and back-end (Node.js, Python, Java, etc.) components in enterprise environments

Engagement & Enablement:
• Collaborate with Product, Engineering, and Customer Success teams to ensure alignment between client needs and platform capabilities
• Support RFP/RFI responses, solution diagrams, and technical documentation
• Provide technical enablement and thought leadership to clients and internal stakeholders on architecture and AI topics

Requirements:
• Bachelor's or Master's in Computer Science, Software Engineering, Data Science, or a related field
• 15 years in technology, with a minimum of 5+ years in a client-facing solutions role (presales, architecture, consulting, or engineering)
• Excellent communication, presentation, and storytelling skills; comfortable with C-level and technical stakeholders
• Strong working knowledge of:
  - Cloud computing (AWS, GCP, Azure; certifications are a plus)
  - AI/ML concepts and tools (LLMs, GenAI, TensorFlow, PyTorch, Hugging Face, LangChain, etc.)
  - AI/ML model deployment, vector search (e.g., Pinecone, Weaviate), and RAG frameworks
  - Application architecture: REST APIs, microservices, containers, serverless
  - Full-stack development: JavaScript frameworks, backend APIs, data pipelines
• Experience with DevOps tools, CI/CD, observability, and infra-as-code
• Proven experience delivering technical demos, PoCs, and architecture proposals to enterprise clients
• Knowledge of AI governance, privacy, and ethical AI implementation
• Ability to create architecture diagrams and documents using tools like Lucidchart, Draw.io, Miro, or Visio

What's in it for you:
• Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation
• Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses, and company-paid certifications
• Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
• Flexibility: hybrid work mode in Chennai or Pune
• Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally, and fulfil your potential
• Global impact: work on large-scale projects that redefine industries, with international and fast-growing clients
• Welcoming environment: feel empowered with a friendly team, an open-door policy, an informal atmosphere within the company, and regular team-building events

About Us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn. Explore, empower, engineer with Ciklum! Experiences of tomorrow. Engineered together.

Interested already? We would love to get to know you! Submit your application. Can't wait to see you at Ciklum.

Apply

Posted 6 days ago


2.0 - 5.0 years

9 - 13 Lacs

Hosur

Work from Office

Job Title: Graduate Engineer Trainee
Company Name: Titan
Job Type: RP
Job Category: Central Technology Services
Department: Central Technology Services
Location: Hosur, Tamil Nadu, India

Titan, a leading company in the Central Technology Services industry, is seeking a highly motivated and talented Graduate Engineer Trainee to join our team in Hosur, Tamil Nadu, India. As a Graduate Engineer Trainee, you will have the opportunity to work with cutting-edge technology and gain valuable experience in the field of Central Technology Services.

Key Responsibilities:
- Participate in training programs to learn about the company's products, services, and processes
- Work closely with senior engineers to assist in the design, development, and implementation of new technology solutions
- Conduct research and analysis to identify areas for improvement and make recommendations for process enhancements
- Collaborate with cross-functional teams to support project initiatives and ensure timely delivery of projects
- Assist in troubleshooting and resolving technical issues as needed
- Stay updated on industry trends and advancements in technology to contribute innovative ideas to the team

Qualifications:
- Bachelor's degree in Engineering or a related field
- Strong academic record with a minimum GPA of 3.0
- Excellent problem-solving and analytical skills
- Proficiency in programming languages such as Java, C++, or Python
- Familiarity with database management and SQL
- Ability to work independently and in a team environment
- Excellent communication and interpersonal skills
- Willingness to learn and adapt to new technologies and processes

Additional Parameters:
- Previous internship or work experience in the Central Technology Services industry is a plus
- Knowledge of Agile methodology is preferred
- Willingness to relocate for future career opportunities within the company

At Titan, we value diversity and equal opportunity in the workplace. We are committed to creating an inclusive environment for all employees and welcome applicants of all backgrounds to apply. Join our team and be a part of our mission to revolutionize the Central Technology Services industry.

Work Experience
- Good knowledge of linear algebra, vectors, calculus, statistics, and Fourier analysis
- Good knowledge of signal-processing techniques such as wavelets, shearlets, curvelets, and bandlets
- Good knowledge of applied mathematics, including gradient descent, convex optimization, Lagrange multipliers, quadratic programming, partial differential equations, and summations
- Good knowledge of machine learning algorithms, neural networks, natural language processing, and related AI technologies
- Proficiency in programming languages such as C++, Python, R, and Java, and experience with AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn)
- Knowledge of data preprocessing, feature engineering, and model development and evaluation
- Strong analytical and problem-solving skills, with the ability to analyze complex datasets and derive actionable insights
- Good communication and interpersonal skills

Major Deliverables / Tasks:
1. Discovering New Possibilities - The role should deliver the following:
- New AI/ML techniques, tools, models, and algorithms for Industry 4.0 and smart wearables
- Use cases based on emerging trends
- Data models with reports for efficient algorithms
- Prototype testing and validation
2. Algorithm/Code Development - The role should perform the following to meet the above objectives:
- Conduct analysis to identify patterns, trends, and insights from large and complex datasets
- Collaborate with data engineers and data scientists to ensure data quality, integrity, and accessibility for AI and ML initiatives
- Monitor and evaluate AI and ML performance metrics, providing recommendations for improvements and optimization
- Develop standard plug-ins and models for easy integration
3. Reports and Documentation - The role is expected to meticulously compile all supporting details and documents from the core activities above and deliver a blueprint of the development process.

Posted 6 days ago

