RESPONSIBILITIES:
- Assist in testing and validating AI/LLM models and Generative AI systems for accuracy, reliability, and consistency.
- Support the development of test plans, test cases, and automated scripts for both AI modules and backend services.
- Perform functional, regression, and integration testing across APIs and system components.
- Execute data quality checks, preprocessing validation, and output verification for AI pipelines and backend data flows.
- Collaborate with engineers to identify, reproduce, and document issues, providing structured feedback on system and model performance.
- Track and manage bugs using issue tracking tools; work with developers to ensure timely fixes and re-validation.
- Contribute to defining QA standards, documentation practices, and release validation procedures.
REQUIREMENTS:
- Strong foundation in Python; familiarity with TypeScript, API testing, or backend validation tools is a plus.
- Understanding of QA fundamentals — test case design, defect tracking, regression testing, and test documentation.
- Interest in Generative AI, LLMs, and NLP systems.
- Basic understanding of AI model evaluation, data validation, or semantic search concepts.
- Familiarity with test automation, data pipelines, or continuous integration environments is advantageous.
- Eagerness to learn new technologies and testing methodologies; strong attention to detail and excellent analytical, communication, and documentation skills.
QUALIFICATIONS:
- Currently pursuing a Bachelor’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- Demonstrated interest in AI/ML through coursework, academic projects, or personal projects.
- No prior professional experience required; previous internships or QA/testing experience in AI/ML is a plus.