Posted: 2 weeks ago
Remote
Contractual
OpenTrain AI assembles expert engineering teams that help top AI companies refine, test, and scale their models. We specialize in high-accuracy data labeling, quality assurance, and evaluation workflows—delivered entirely remotely.
We’re looking for a seasoned Python professional to audit evaluations of AI-generated code. You will review annotator assessments, verify code correctness and security, and provide concise feedback that directly improves large-language-model (LLM) performance.
Salary: Not disclosed