About mpathic.ai
Keeping the human in AI. mpathic is a trusted leader in advancing quality and safety in AI systems through expert-led evaluation and human data. We partner with leading technology companies to support red teaming, trust & safety, expert annotation, and model evaluation across high-stakes domains.
Our reviewers bring deep expertise in behavioral analysis, conversational design, mental health, and, increasingly, financial and enterprise decision-making contexts.
About the Role
mpathic is seeking part-time Financial Experts to support a red-teaming and quality assurance (QA) campaign focused on evaluating AI system behavior in consumer-facing financial interactions.
In this role, you will review AI-generated responses and multi-turn conversations to identify risks related to financial guidance, inappropriate agreement (e.g., sycophancy), overconfidence, and failure to appropriately communicate uncertainty or limitations.
This is not a financial advising role. Instead, it focuses on evaluation and red teaming—specifically, adversarial thinking and expert judgment applied to AI outputs in simulated scenarios.
What You’ll Be Working On
You will help identify, prevent, and characterize risks that emerge when users engage AI systems in financial and general inquiry contexts.
Responsibilities may include:
- Reviewing AI-generated financial content and conversations for accuracy, appropriateness, and risk
- Identifying unsafe or misleading financial guidance (e.g., overconfident claims, risk minimization, inappropriate advice)
- Evaluating how AI systems handle uncertainty, disclaimers, and scope of knowledge in consumer-facing contexts
- Assessing whether models appropriately challenge or push back on risky or incorrect user assumptions
- Identifying patterns of sycophancy, over-alignment, or inappropriate agreement
- Evaluating multi-turn conversations for drift, escalation, and policy breakdown over time
- Participating in or reviewing red-teaming exercises, including adversarial probing of AI systems to surface failure modes
- Evaluating how models respond under pressure, ambiguity, and escalating user intent
- Supporting quality assurance (QA) of red-teaming outputs to ensure consistency and rigor
- Documenting edge cases, failure modes, and emerging risk patterns
- Providing structured written feedback to internal teams
- Collaborating with interdisciplinary teams on AI safety, policy, and evaluation frameworks
- Maintaining strict confidentiality and quality standards
This role requires strong judgment, attention to nuance, and comfort evaluating ambiguous or evolving scenarios.
What We’re Looking For
Successful candidates are thoughtful, detail-oriented, and able to apply financial expertise to assess risk, uncertainty, and appropriateness in conversational AI systems.
Basic Qualifications
- Professional experience in one or more of the following:
- Finance, investment analysis, or financial advising
- Banking, wealth management, or asset management
- Financial risk, compliance, or regulatory roles
- Corporate finance, accounting, or financial planning
- Strong understanding of:
- Financial risk, uncertainty, and decision-making
- Appropriate vs. inappropriate financial guidance in consumer contexts
- How non-experts interpret financial information
- Ability to identify:
- Overconfidence, misleading claims, or missing risk disclosures
- Inappropriate agreement with risky or incorrect user assumptions
- Failures in escalation, boundary setting, or uncertainty communication
- Strong written communication skills and ability to clearly explain reasoning
- Experience with or interest in:
- Red teaming, adversarial testing, or safety evaluation of AI systems
- Evaluating how systems fail under realistic user behavior
- Comfort working with AI tools and conversational outputs
- Ability to work remotely using Slack and standard productivity tools
- Comfort with ambiguity, iteration, and feedback-driven workflows
- Willingness to sign NDAs and work with sensitive content
- Availability ~10 hours per week for 8 weeks (starting in mid-April), with occasional scheduled meetings
Nice to Have (Not Required)
- Certifications (e.g., CFA, CFP, CPA, FRM)
- Experience in financial compliance or regulatory frameworks (e.g., SEC, FINRA)
- Background in consumer financial protection or financial education
- Experience with fintech, digital finance products, or robo-advisors
- Prior experience with AI evaluation, annotation, or safety work
- Interest in AI, NLP, or responsible technology
Compensation
$30-200/hour, depending on experience and the difficulty of specific project tasks