Software Engineer - GenAI Developer
The Customer Success organization at Dassault Systèmes is dedicated to empowering our customers' success through unparalleled support and knowledge. As a Senior Customer Success GenAI Developer, you will serve as a technical anchor for our next-generation AI ecosystem. In this role, you will architect and implement advanced Enterprise RAG and GraphRAG systems, while contributing to the evolution of resilient Multi-Agent ReAct architectures.
This is a code-intensive position that demands mastery of Python and a commitment to software craftsmanship. You will own the full engineering lifecycle, from high-throughput data ingestion via Apache NiFi and Kafka to the deployment of stateful, autonomous agents. As a key technical contributor, you will drive engineering rigor through Pydantic-based validation, thorough code reviews, and the implementation of automated AI evaluation frameworks. You will also mentor junior developers, fostering a culture where AI is built as a robust, secure, and observable software discipline.
Role Description & Responsibilities:
- AI Engineering & Agentic Architecture
Architectural Mastery: Expert design of Enterprise RAG and GraphRAG (utilizing Neo4j) with a focus on hybrid search, query expansion, and cross-encoder reranking.
Agentic Frameworks: Proven success in building stateful, multi-agent systems using LangGraph, Haystack, or Microsoft Semantic Kernel, specifically focusing on ReAct (Reason + Act) loops and autonomous function calling.
Scientific Evaluation: Experience implementing automated evaluation frameworks (e.g., RAGAS, DeepEval, or G-Eval) within the CI/CD pipeline to scientifically measure faithfulness, relevancy, and retrieval precision.
Model Optimization: Hands-on experience with fine-tuning open-source LLMs (Llama, Mistral) and implementing model quantization and prompt caching to maximize throughput and minimize latency.
- Software Craftsmanship & Pythonic Rigor
Advanced Python Engineering: Proficiency in idiomatic Python, including expert-level use of Asyncio for high-concurrency LLM calls and Pydantic (v2) for strict data validation and schema enforcement.
Production Quality: High commitment to writing maintainable, well-tested code. Experience conducting deep-dive code reviews that focus on architectural consistency and performance.
AI Security & Governance: Knowledge of AI security best practices, including Prompt Injection mitigation, PII masking (e.g., Microsoft Presidio), and the use of AI Gateways (e.g., LiteLLM, Portkey) for governance.
- Data Engineering & Observability
Data Ingestion Orchestration: Proficiency in Apache NiFi for designing automated, complex data flows, alongside experience in data manipulation and cleaning at scale.
Stream Processing: Proficiency in integrating Kafka for real-time data ingestion and event-driven triggers for AI agent workflows.
MLOps & Lifecycle: Expert use of MLFlow for tracking experiments, versioning models, and managing the end-to-end model deployment lifecycle.
Full-Stack Observability: Knowledge of the Grafana (LGTM) stack, OpenTelemetry, and Traceloop to monitor agentic reasoning paths, token costs, and system health.
- Cloud Mastery & Performance (Highly Desirable)
Cloud-Native AI: Experience deploying AI workloads on AWS (EKS, Bedrock), Azure (AKS, OpenAI Service), or GCP (GKE, Vertex AI).
Cost & Latency Engineering: Proven ability to implement Semantic Caching (e.g., Redis) and intelligent model routing to optimize the cost-to-performance ratio.
Infrastructure as Code (IaC): Familiarity with Terraform, Pulumi, or Bicep to provision cloud-scale vector stores and GPU-accelerated compute instances.
Qualifications:
Professional Experience: Software engineering experience, with 3 to 5 years specifically focused on deploying Generative AI and LLM applications in a production environment.
Industry Background: Prior experience in B2B SaaS, Customer Success, or high-scale Enterprise Support environments is strongly preferred.
Leadership in Code: Demonstrated experience mentoring junior developers and leading architectural discussions in a collaborative team setting.
Education: Master’s or Bachelor’s in Computer Science, AI, Data Science, or a related field.
- Highly Desirable Professional & Expert Certifications
Note: Only Professional-, Specialty-, and Expert-level certifications are requested.
- AI & Machine Learning
AWS: Certified Machine Learning – Specialty
GCP: Professional Machine Learning Engineer
Databricks: Generative AI Engineer Professional
NVIDIA: Professional-level certifications in GenAI or Deep Learning (DLI)
- Cloud & Solutions Architecture
AWS: Certified Solutions Architect – Professional OR Certified DevOps Engineer – Professional
Azure: Solutions Architect Expert (AZ-305) OR DevOps Engineer Expert (AZ-400)
GCP: Professional Cloud Architect
What’s in it for you
Work in a culture of collaboration and innovation
Share knowledge within a supportive team
Collaborate and communicate effectively with co-workers, both local and remote
Work in a multinational, multicultural company
Inclusion statement
Dassault Systèmes is a catalyst for human progress. We provide business and people with collaborative virtual environments to imagine sustainable innovations. By creating virtual twin experiences of the real world with our 3DEXPERIENCE platform and applications, we bring value to more than 350,000 customers of all sizes, in all industries, in more than 150 countries. Join our global community of more than 23,800 passionate individuals!