AI - Quality Engineer

India, KA, Bangalore
Full-time

Key job information
Location:
India, KA, Bangalore
Job type:
Full-time
Product:
ENOVIA
Experience level:
0 to 3 years
Posted on:
5/7/2026
Ref ID:
547120

AI - Quality Engineer (ENOVIA)

For ENOVIA, our brand promise is Reinvent Your Definition of Success. Powered by the 3DEXPERIENCE platform, ENOVIA helps you deliver transformative innovations.

The Generative Economy is reshaping how industries invent, learn and create value. ENOVIA enables this movement by helping redefine what’s possible in Product Lifecycle Management (PLM). With ENOVIA, organizations tap into intelligent, AI-driven workflows that accelerate design exploration, enhance simulation fidelity and streamline decision-making. Powered by the strength of 3D UNIV+RSES, this new approach combines virtual twins, generative models and trusted data to deliver richer insights and greater agility throughout the product lifecycle and across business operations.

ENOVIA provides a secure, cloud-native foundation where teams collaborate seamlessly and innovation scales effortlessly. By combining the real-time exchange of knowledge and know-how with generative intelligence, ENOVIA helps businesses optimize processes, reduce waste and unlock new levels of creativity—from early concept to manufacturing and beyond.

ROLE DESCRIPTION AND RESPONSIBILITIES

  • Stay current with state-of-the-art software testing practices, with a focus on AI/ML/LLM evaluation (offline metrics, human-in-the-loop reviews, A/B testing, red teaming).

  • Anticipate AI-specific risks (hallucinations, prompt/guardrail bypass, data leakage, drift) and propose measurable quality criteria and release gates.

  • Review specifications end to end:

    • Validate functional specs for completeness and testability.

    • Validate data, ingestion pipelines, retrieval (RAG), prompts/guardrails, and model specifications (intended use, limits, metrics, licensing) when needed.

  • Define the test strategy of the global AI-powered solution — what/how/when/depth for:

    • Functional & non-functional tests (latency, throughput, cost),

    • Safety/Responsible AI (RAI) (toxicity, bias/fairness, privacy),

    • Security (jailbreak resistance, Personally Identifiable Information (PII) redaction) using Security team guidance and tooling,

    • Performance (golden sets & baselines),

    • Reliability and robustness (edge cases, regression),

    • Integration (pipelines, APIs, apps, services).

  • Create test plans, scenarios, matrices, and evaluation datasets (incl. synthetic data where appropriate). Create benchmarks to evaluate AI and LLM responses on various complex tasks.

  • Request & prioritize automation enablers from AI/Software Engineers & MLOps (evaluation harnesses, fixtures, page objects, seed datasets, telemetry probes).

  • Help create and label datasets to train AI models.

  • Execute functional tests for AI features:

    • correctness & error handling;

    • retrieval quality;

    • prompt/chain testing;

    • component & app integration;

    • data ingestion;

    • device specific (e.g., Mobile);

    • exploratory.

  • Execute non-functional tests:

    • install/upgrade & model artifact compatibility;

    • usability;

    • performance/capacity/scalability & cost;

    • security (with Security team scenarios);

    • reliability/availability;

    • internationalization & localization;

    • safety/RAI (hallucination rate, toxicity filters, bias checks).

  • Record results per R&D methods; log and severity classify defects (data, model, prompt, guardrail, service).

  • Verify fixes and quality exit criteria at each gate; escalate schedule/quality issues requiring a recovery plan; provide assessment on tested scope.

  • Validate updated AI models by benchmarking the global user workflows against previous versions.

  • For Cloud/AI services, manage and stabilize test environments (datasets, seeds, feature flags, model versions).

  • Automate replay of scenarios using delivered enablers; continuously improve suites for efficiency & coverage; optimize automated cases.

  • Leverage usage analytics & user feedback to harden tests and expand gold sets; submit requests for new enablers to increase automation.

  • Track the statistical variability of AI model responses in the context of global user workflows.

  • Support defect resolution across data, model, prompt, and code changes; advocate for customer expectations of AI quality and safety.

  • Share knowledge on AI testing techniques, datasets, evaluation harnesses, and lessons learned across teams; contribute to internal QA/RAI communities.

  • Comply with R&D processes and meet Key Activity & Performance Indicators.

  • Develop and leverage AI test frameworks to automate the testing of applications and software.

  • Strong understanding of AI/LLM concepts, including Generative AI, RAG architectures, prompt engineering, and vector databases.

  • Evaluation Frameworks: Hands-on experience with tools like Ragas, DeepEval, Promptflow, LangSmith, or Gantry

  • Statistical Thinking: Ability to apply scoring rubrics and understand variance, distributions, and confidence intervals in probabilistic outputs (Good to have)

  • Strong programming skills in Python (preferred), Java, JavaScript, or similar.
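The evaluation responsibilities above (building benchmarks against golden sets and tracking the statistical variability of probabilistic outputs) can be illustrated with a minimal sketch. This is a hypothetical example, not ENOVIA tooling: the exact-match scorer, the golden set, and the confidence-interval helper are illustrative assumptions; in practice a framework such as Ragas or DeepEval would supply richer metrics.

```python
import math
import statistics

def exact_match(response: str, reference: str) -> float:
    """Score 1.0 if the normalized response matches the reference, else 0.0."""
    return 1.0 if response.strip().lower() == reference.strip().lower() else 0.0

def evaluate_run(responses, golden_set, scorer=exact_match):
    """Score one evaluation run of model responses against a golden set."""
    scores = [scorer(r, g) for r, g in zip(responses, golden_set)]
    return sum(scores) / len(scores)

def confidence_interval(run_scores, z=1.96):
    """Mean and approximate 95% CI over repeated runs, capturing the
    run-to-run variability of a non-deterministic model."""
    mean = statistics.mean(run_scores)
    if len(run_scores) < 2:
        return mean, mean, mean
    sem = statistics.stdev(run_scores) / math.sqrt(len(run_scores))
    return mean, mean - z * sem, mean + z * sem

# Hypothetical golden set and three repeated runs of the same prompts.
golden = ["paris", "4"]
runs = [
    evaluate_run(["Paris", "4"], golden),
    evaluate_run(["Paris", "5"], golden),
    evaluate_run(["paris", "4"], golden),
]
mean, low, high = confidence_interval(runs)
```

A release gate could then assert, for example, that the lower bound of the interval stays above an agreed baseline before a model update ships.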

Qualifications

  • 1 to 5 years of experience with an Engineering degree, with a minimum of 60% throughout all academics.

  • A strong passion for software quality and the proper functioning of AI-based applications, with advanced skills in test design and coverage.

  • Experience working with CI/CD and version-control tools such as Jenkins, SVN, and GitLab.

  • Experience with API, database, and UI test automation is a plus.

  • Your curiosity, rigor, pro-activeness as well as your interpersonal skills will be essential to succeed in this position.

  • You should be fluent in written and spoken English.

What’s in it for you

  • Work in a culture of collaboration and innovation

  • Be at the forefront of building software products that are deployed in mission-critical projects worldwide

  • Be offered avenues to develop yourself for career progression

  • Not a narrowly scoped development opportunity: you will work in the real business world with a wide range of customers & coworkers

  • Work on a variety of technologies, products and solutions

Inclusion statement

As a game-changer in sustainable technology and innovation, Dassault Systèmes is striving to build more inclusive and diverse teams across the globe. We believe that our people are our number one asset and we want all employees to feel empowered to bring their whole selves to work every day. It is our goal that our people feel a sense of pride and a passion for belonging. As a company leading change, it’s our responsibility to foster opportunities for all people to participate in a harmonized Workforce of the Future.

ENOVIA enables companies and industry innovators to collaboratively create and execute a successful plan, turning market opportunities into market advantages.
