Software Engineer - GenAI Developer
The Customer Success organization at Dassault Systèmes is dedicated to empowering our customers' success through unparalleled support and knowledge. As a Senior Customer Success GenAI Developer, you will serve as a technical anchor for our next-generation AI ecosystem. In this role, you will architect and implement advanced Enterprise RAG and GraphRAG systems, while contributing to the evolution of resilient Multi-Agent ReAct architectures.
This is a code-intensive position that demands mastery of Python and a commitment to software craftsmanship. You will own the full engineering lifecycle—from high-throughput data ingestion via Apache NiFi and Kafka to the deployment of stateful, autonomous agents. As a key technical contributor, you will drive engineering rigor through Pydantic-based validation, rigorous code reviews, and the implementation of automated AI evaluation frameworks. You will also mentor junior developers, fostering a culture where AI is built as a robust, secure, and observable software discipline.
Role Description & Responsibilities:
AI Engineering & Agentic Architecture
Architectural Mastery: Expert design of Enterprise RAG and GraphRAG (utilizing Neo4j) with a focus on hybrid search, query expansion, and cross-encoder reranking.
Agentic Frameworks: Proven success in building stateful, multi-agent systems using LangGraph, Haystack, or Microsoft Semantic Kernel, specifically focusing on ReAct (Reason + Act) loops and autonomous function calling.
Scientific Evaluation: Experience implementing automated evaluation frameworks (e.g., RAGAS, DeepEval, or G-Eval) within the CI/CD pipeline to scientifically measure faithfulness, relevancy, and retrieval precision.
Model Optimization: Hands-on experience with fine-tuning open-source LLMs (Llama, Mistral) and implementing model quantization and prompt caching to maximize throughput and minimize latency.
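The ReAct (Reason + Act) pattern named above can be sketched framework-agnostically as a loop that alternates model reasoning with tool calls. This is a minimal illustration only: the stubbed model, the `count_docs` tool, and the toy documents are all hypothetical stand-ins, not part of any named framework.

```python
# Minimal ReAct (Reason + Act) loop with a stubbed model and a toy tool.
# A real system would call an LLM here; the stub's replies are hypothetical.

def stub_model(history: str) -> str:
    """Stand-in for an LLM: reasons once, calls a tool, then answers."""
    if "Observation:" not in history:
        return "Thought: I need the document count.\nAction: count_docs[genai]"
    return "Final Answer: 2 documents mention genai."

TOOLS = {
    "count_docs": lambda q: sum(q in d for d in ["genai rag", "genai agents", "kafka"]),
}

def react_loop(question: str, max_steps: int = 5) -> str:
    history = f"Question: {question}"
    for _ in range(max_steps):
        reply = stub_model(history)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[arg]" and feed the result back as an observation.
        action = reply.split("Action:")[1].strip()
        name, arg = action.split("[", 1)
        result = TOOLS[name](arg.rstrip("]"))
        history += f"\n{reply}\nObservation: {result}"
    return "max steps reached"

print(react_loop("How many documents mention genai?"))  # → 2 documents mention genai.
```

Frameworks such as LangGraph manage this loop as an explicit state graph, but the core cycle of thought, action, and observation is the same.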
Software Craftsmanship & Pythonic Rigor
Advanced Python Engineering: Proficiency in idiomatic Python, including expert-level use of Asyncio for high-concurrency LLM calls and Pydantic (v2) for strict data validation and schema enforcement.
Production Quality: High commitment to writing maintainable, well-tested code. Experience conducting deep-dive code reviews that focus on architectural consistency and performance.
AI Security & Governance: Knowledge of AI security best practices, including Prompt Injection mitigation, PII masking (e.g., Microsoft Presidio), and the use of AI Gateways (e.g., LiteLLM, Portkey) for governance.
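The pairing of Asyncio for high-concurrency LLM calls with Pydantic (v2) for schema enforcement can be sketched as below. The `fake_llm` coroutine and the `Answer` schema are illustrative assumptions standing in for a real provider SDK and a real structured-output contract.

```python
import asyncio
from pydantic import BaseModel

# Hypothetical structured output we expect back from an LLM call.
class Answer(BaseModel):
    question: str
    score: float

async def fake_llm(question: str) -> str:
    """Stand-in for a provider SDK call; returns a JSON string."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f'{{"question": "{question}", "score": 0.9}}'

async def ask(question: str, sem: asyncio.Semaphore) -> Answer:
    async with sem:  # bound concurrency so we don't flood the provider
        raw = await fake_llm(question)
    return Answer.model_validate_json(raw)  # strict schema enforcement

async def main() -> list[Answer]:
    sem = asyncio.Semaphore(8)
    questions = [f"q{i}" for i in range(5)]
    # gather preserves input order even though calls run concurrently
    return await asyncio.gather(*(ask(q, sem) for q in questions))

answers = asyncio.run(main())
print([a.question for a in answers])  # → ['q0', 'q1', 'q2', 'q3', 'q4']
```

Validation failures raise `pydantic.ValidationError` at the boundary, so malformed model output is rejected before it propagates into downstream agent state.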
Data Engineering & Observability
Data Ingestion Orchestration: Proficiency in Apache NiFi for designing automated, complex data flows, alongside experience in data manipulation and cleaning at scale.
Stream Processing: Proficiency in integrating Kafka for real-time data ingestion and event-driven triggers for AI agent workflows.
MLOps & Lifecycle: Expert use of MLFlow for tracking experiments, versioning models, and managing the end-to-end model deployment lifecycle.
Full-Stack Observability: Knowledge of the Grafana (LGTM) stack, OpenTelemetry, and Traceloop to monitor agentic reasoning paths, token costs, and system health.
Cloud Mastery & Performance (Highly Desirable)
Cloud-Native AI: Experience deploying AI workloads on AWS (EKS, Bedrock), Azure (AKS, OpenAI Service), or GCP (GKE, Vertex AI).
Cost & Latency Engineering: Proven ability to implement Semantic Caching (e.g., Redis) and intelligent model routing to optimize the cost-to-performance ratio.
Infrastructure as Code (IaC): Familiarity with Terraform, Pulumi, or Bicep to provision cloud-scale vector stores and GPU-accelerated compute instances.
Qualifications:
Professional Experience: Software engineering experience, with 3 to 5 years specifically focused on deploying Generative AI and LLM applications in production environments.
Industry Background: Prior experience in B2B SaaS, Customer Success, or high-scale Enterprise Support environments is strongly preferred.
Leadership in Code: Demonstrated experience mentoring junior developers and leading architectural discussions in a collaborative team setting.
Education: Master’s or Bachelor’s in Computer Science, AI, Data Science, or a related field.
Highly Desirable Professional & Expert Certifications
Note: Only Professional, Specialty, and Expert level certifications are requested.
AI & Machine Learning
AWS: Certified Machine Learning – Specialty
GCP: Professional Machine Learning Engineer
Databricks: Generative AI Engineer Professional
NVIDIA: Professional-level certifications in GenAI or Deep Learning (DLI)
Cloud & Solutions Architecture
AWS: Certified Solutions Architect – Professional OR Certified DevOps Engineer – Professional
Azure: Solutions Architect Expert (AZ-305) OR DevOps Engineer Expert (AZ-400)
GCP: Professional Cloud Architect
What’s in it for you
Work in a culture of collaboration and innovation
Ensure knowledge sharing within the team
Work well in a team environment and communicate effectively with co-workers, both local and remote
Work in a multinational, multicultural company
Inclusion statement
When humankind sets out to shape the future, Dassault Systèmes is there to support the effort. We provide collaborative virtual environments that make sustainable innovation possible. By building virtual twin experiences of the real world with the 3DEXPERIENCE platform and applications, we deliver value to more than 350,000 customers of every size, in every industry, in more than 150 countries. Join our passionate global community of more than 23,800 people.