Agentic AI Data Scientist & MindBridge

Who Is MindBridge & How Are They Changing the World?

MindBridge is on a mission to bring clarity, speed, and trust to financial data. Their AI-powered platform helps finance professionals and auditors surface anomalies and potential risks in massive datasets. What once took weeks of manual effort is now flagged automatically with the power of machine learning. With growing enterprise adoption and strong backing from their investors (PSG), MindBridge is scaling its platform and evolving from its audit-tech roots into a broader financial risk solution.

How Will I Make An Impact?

Reporting to the AI Architect, you’ll join MindBridge’s Applied AI team and own the full lifecycle of LLM-powered autonomous systems - problem framing, experimentation, productionization, and governance. This is a hybrid discovery + delivery mandate: you’ll design and scale multi-step, tool-using agentic workflows (including multi-agent patterns like planners, reviewers, and supervisors), ground behavior in enterprise data via RAG, and implement memory and prompting strategies that hold up in real production contexts.

You’ll also bring a strong data science and experimentation lens - building the scoring/ranking/policy models agents depend on, designing rigorous evaluations against baselines, and quantifying business impact. Because MindBridge operates in enterprise and regulated contexts, you’ll help build systems that are safe and resilient by default: guardrails, escalation paths, human-in-the-loop patterns, observability, audit trails, and governance practices that keep agent behavior explainable and reliable over time.

How Do I Know If This Is For Me?

  • You’ve shipped applied ML/AI systems into production, not just notebooks or demos.
  • You’ve built or scaled LLM-powered workflows (RAG, tool use, prompt strategies, memory).
  • You can design agentic systems that reason/plan/act across multi-step business workflows.
  • You like experimentation: baselines, metrics, offline evals, and online testing to prove impact.
  • You care about reliability + governance: guardrails, escalation, explainability, auditability.
  • You understand production realities: monitoring, tracing, cost/performance tradeoffs, failure modes.
  • You’re comfortable partnering closely with engineering + product to ship real systems.
  • You can bridge prototype speed with enterprise-grade robustness and maintainability.
  • You’re excited about driving impact on a lean but high-growth team.

Our Ideal Candidate Looks Like:

  • 5+ years in applied ML, data science, or ML engineering with production systems experience
  • Strong Python proficiency and fluency with ML tooling (PyTorch, TensorFlow, NumPy, Pandas, or similar)
  • Hands-on experience with LLMs and generative AI (prompt engineering, RAG, fine-tuning)
  • Experience with agentic/orchestration frameworks or equivalent multi-step AI workflow development
  • Practical MLOps experience (MLflow, Kubeflow, Airflow, Vertex AI, SageMaker, or similar)
  • Solid software engineering fundamentals: version control, testing, CI/CD, code review
  • Production experience with autonomous LLM/agentic systems
  • Knowledge of distributed systems and cloud platforms (AWS, GCP, Azure) with containerization (Docker, Kubernetes)
  • Background in RAG architectures, reinforcement learning, or multi-agent coordination
  • Experience with data platforms (warehouses, streaming systems, vector databases)
  • Familiarity with AI governance and risk management in enterprise/regulated environments
  • Track record of mentoring engineers and leading cross-functional initiatives

We understand, accept, and value the differences between people of different backgrounds, genders, sexual orientations, ages, beliefs, and abilities. We are happy to make any accommodations you may need throughout the interview process. We aim to create an inclusive environment and encourage diverse individuals to apply.

Vacancy:

This is a newly created position.

Salary:

$152,000 - $195,000

In addition to base salary, this role is eligible for an annual performance bonus.

Final compensation will be determined based on experience, skill set, and alignment with the role requirements.

The Process:

  • Screening with Artemis
  • Intro + technical deep dive with AI Architect + a team member
  • Leadership interview with CTO
  • HR interview with VP People & Culture

And of course your Artemis Canada consultant Tara will work closely with you throughout every step of the process.

We’d love to hear from you, even if you don’t meet 100% of the requirements! Send a note to tara@artemiscanada.com if you or someone you know is interested!

Tara Stevens of Artemis Canada

Does this sound exciting to you?

Let's chat!