About iamneo
Founded in 2016 and now part of the NIIT family, iamneo is a fast-growing, profitable B2B EdTech SaaS company redefining how tech talent is upskilled, assessed, and deployed. Our AI-powered learning and assessment platforms empower enterprises and educational institutions to build future-ready talent at scale. We work with leading corporates such as LTIMindtree, Virtusa, Tech Mahindra, and Hexaware, and partner with 150+ institutions including VIT, SRM, LPU, Sri Krishna Institutions, and Manipal. As part of NIIT’s 40+ year legacy in learning and talent development, we bring together deep global expertise with an AI-first, product-driven approach to modern workforce transformation. If you’re passionate about storytelling, building relationships, and driving on-ground impact, this role is for you.
About the Role
We are seeking a skilled Python Content Developer with strong expertise in Artificial Intelligence (AI) and Machine Learning (ML) to design and create high-quality learning materials, tutorials, projects, and assessments. You will translate complex AI/ML concepts into engaging, easy-to-understand content for learners ranging from beginners to advanced professionals. The role involves working with distributed systems, cloud platforms, and modern data frameworks to support real-time and batch data pipelines; the skillsets listed below are used to create content and hands-on labs.
Key Responsibilities
- Work with Python, Apache Spark, Hadoop, and Kafka to build efficient data processing solutions (a sample streaming sketch follows this list).
- Keep material current with the latest Python releases, AI/ML frameworks, and industry best practices.
- Implement data lakes, data warehouses, and streaming architectures.
- Optimize database and query performance for large-scale datasets.
- Collaborate with SMEs, clients, and software engineers to deliver content.
- Ensure data security, governance, and compliance with industry standards.
- Automate workflows using Apache Airflow or other orchestration tools.
- Monitor and troubleshoot data pipelines to ensure reliability and scalability.
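To give a flavor of the hands-on labs this role produces, here is a minimal Spark Structured Streaming sketch that consumes JSON events from Kafka. It assumes a local broker at localhost:9092, a hypothetical "events" topic with a made-up schema, and that the spark-sql-kafka connector package is available; all names are illustrative, not part of this posting.

```python
# Minimal lab sketch: count user actions from a Kafka stream.
# Assumes: Kafka at localhost:9092, a hypothetical "events" topic,
# and Spark launched with the spark-sql-kafka-0-10 connector package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("kafka-streaming-lab").getOrCreate()

# Illustrative schema for the incoming JSON events.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", LongType()),
])

# Read the stream, parse the JSON payload, and flatten it into columns.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Aggregate actions per user and print running counts to the console.
query = (
    events.groupBy("user_id", "action").count()
    .writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```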
Required Qualifications
- Minimum educational qualification: B.E., B.Sc., M.Sc., or MCA
- Experience requirements:
- Proficiency in Python, Java, or Scala for data processing.
- Familiarity with machine learning pipelines and AI-driven analytics.
- Hands-on experience with Apache Spark, Hadoop, Kafka, Flink, Storm.
- Hands-on experience working with SQL and NoSQL databases.
- Strong expertise in cloud-based data solutions (AWS, Google Cloud, or Azure).
- Hands-on experience building and managing ETL/ELT pipelines (a sample DAG sketch follows this list).
- Knowledge of containerization and orchestration (Docker, Kubernetes).
- Hands-on experience with real-time data streaming and serverless data processing.
- Strong understanding of CI/CD and ETL pipelines for data workflows.
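As an illustration of the orchestration experience described above, here is a minimal sketch of a daily ETL pipeline using Airflow's TaskFlow API (Airflow 2.4+). The extract/transform/load steps are hypothetical stubs, not a prescribed implementation.

```python
# Minimal ETL DAG sketch (Airflow 2.4+ TaskFlow API).
# The three steps are illustrative stubs; real tasks would talk to
# actual source systems and a warehouse.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_etl():
    @task
    def extract() -> list:
        # Pull raw records from a source system (stubbed).
        return [{"user_id": "u1", "amount": 42}]

    @task
    def transform(records: list) -> list:
        # Apply a simple, deterministic enrichment.
        return [{**r, "amount_doubled": r["amount"] * 2} for r in records]

    @task
    def load(records: list) -> None:
        # Write to the warehouse; printed here for the sketch.
        print(f"loading {len(records)} records")

    load(transform(extract()))


daily_etl()
```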
Technical Skills
- Big Data Technologies: Apache Spark, Hadoop, Kafka, Flink, Storm
- Cloud Platforms: AWS / Google Cloud / Azure
- Programming Languages: Python, Java, Scala, SQL, PySpark
- Data Storage & Processing: Data Lakes, Warehouses, ETL/ELT Pipelines
- Orchestration: Apache Airflow, Prefect, Dagster
- Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra)
- Security & Compliance: IAM, Data Governance, Encryption
- DevOps Tools: Docker, Kubernetes, Terraform, CI/CD Pipelines
Soft Skills
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Ability to work in an agile, fast-paced environment
- Attention to detail and data accuracy
- Self-motivated and proactive
Skills: Machine Learning, ML, Artificial Intelligence, Python