[SYSTEM_SPEC]
AI Systems Engineer with strengths in machine learning, computer vision, mechanistic interpretability, and embedded hardware–software co-design. Experience includes defense ML at Raytheon, safety-critical EV engineering at Ford, and autonomous-vehicle perception research. Builds transparent, reliable AI systems optimized for constrained hardware and safety-relevant environments.
[EXPERIENCE_LOG]
Raytheon Technologies
Summer 2023
Research Intern
Machine Learning Research | Robotics Intelligence & Sensor Fusion
- Built structured-light + camera fusion models for robotic anomaly detection in aerospace/defense applications.
- Developed interpretability tools for activation tracing and fused-sensor reasoning to ensure model transparency.
- Created reproducible diagnostic workflows for safety-aligned ML prototyping in high-stakes environments.
MDAS.ai Autonomous Vehicle Research
Summer 2018
Research Assistant
Perception Systems | University of Michigan–Dearborn
- Built CAN/IMU/CV multimodal fusion pipeline on NVIDIA TX2 for autonomous-shuttle perception research.
- Designed visualization tools to inspect per-modality contributions and temporal reasoning in real-time systems.
- Supported real-world AV data collection and iterative perception-model refinement for edge deployment.
Ford Motor Company
Summer 2017
Engineering Intern
EV Functional Safety | Electric Vehicle Systems
- Supported ISO 26262 functional-safety engineering for EV control systems and battery architectures.
- Built logic-tracing and failure-propagation tools for embedded-software safety-verification teams.
- Contributed to system-behavior analysis and safety-case documentation for production vehicles.
[PROJECTS]
./ai_manifold_interpreter.py
Geometric latent-space interpretability framework for analyzing representational structure and concept trajectories in neural networks.
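A minimal sketch of the core idea: project per-layer activations onto principal axes and track where concept-labeled samples sit at each depth. Names and data layout below are illustrative assumptions, not the framework's actual API.
  # Illustrative only; function names and shapes are assumptions.
  import numpy as np

  def pca_project(acts, k=2):
      """Project activations (n_samples, d) onto their top-k principal axes."""
      centered = acts - acts.mean(axis=0)
      _, _, vt = np.linalg.svd(centered, full_matrices=False)
      return centered @ vt[:k].T

  def concept_trajectory(layer_acts, concept_mask):
      """Per-layer mean position of concept-labeled samples in PCA space."""
      return np.stack([pca_project(acts)[concept_mask].mean(axis=0)
                       for acts in layer_acts])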
./attribution_graphs.py
Directed influence-path tracing for circuit-level model reasoning and mechanistic interpretability analysis.
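A hedged sketch of the path-tracing step: given per-layer attribution matrices (e.g., |gradient x activation| scores between adjacent layers), greedily follow the strongest upstream contributor from an output unit back to the input. The data layout is an assumption for illustration, not the tool's actual graph format.
  # Illustrative only; the real tool's data structures differ.
  import numpy as np

  def trace_strongest_path(attributions, output_unit):
      """attributions: list of (n_upstream, n_downstream) matrices, input->output.
      Returns unit indices along the highest-influence path, input-first."""
      path, unit = [output_unit], output_unit
      for attr in reversed(attributions):
          unit = int(np.argmax(attr[:, unit]))  # strongest upstream contributor
          path.append(unit)
      return path[::-1]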
./catnet_embedded_cv.py
Embedded UNet+CNN computer vision system with integrated interpretability layers for constrained hardware deployment.
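A minimal PyTorch sketch of a single-skip UNet of the kind described; channel counts and depth are illustrative assumptions, not the deployed configuration.
  # Illustrative only; channel counts and depth are assumptions.
  import torch
  import torch.nn as nn

  class TinyUNet(nn.Module):
      """One-skip encoder/decoder sized for constrained hardware."""
      def __init__(self, c_in=3, c_out=1, c=16):
          super().__init__()
          self.enc = nn.Sequential(nn.Conv2d(c_in, c, 3, padding=1), nn.ReLU())
          self.down = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(c, 2 * c, 3, padding=1), nn.ReLU())
          self.up = nn.ConvTranspose2d(2 * c, c, 2, stride=2)
          self.dec = nn.Conv2d(2 * c, c_out, 3, padding=1)  # 2*c after concat

      def forward(self, x):
          e = self.enc(x)                     # full-resolution skip features
          d = self.up(self.down(e))           # bottleneck, then upsample
          return self.dec(torch.cat([e, d], dim=1))  # fuse skip connection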
./remy_wgan_gp_sandbox.py
Generative-model interpretability sandbox with gradient/latent visualization hooks for WGAN-GP training analysis.
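For context, the standard WGAN-GP gradient penalty (Gulrajani et al., 2017) that the sandbox hooks into, sketched in PyTorch; the critic D and image-shaped inputs are assumptions here.
  # Standard WGAN-GP penalty; D and input shapes are assumptions.
  import torch

  def gradient_penalty(D, real, fake, lam=10.0):
      eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
      x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
      grads, = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)
      norms = grads.flatten(1).norm(2, dim=1)   # per-sample gradient norm
      return lam * ((norms - 1) ** 2).mean()    # penalize deviation from 1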
./fpga_mlp_accelerator.vhdl
VHDL MLP accelerator emphasizing deterministic, inspectable activation flow for FPGA deployment.
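A Python emulation of the fixed-point datapath such a design implements; the Q4.12 format and bit widths below are assumptions, not the actual RTL parameters.
  # Fixed-point MAC emulation; Q-format and widths are assumptions.
  import numpy as np

  FRAC = 12  # fractional bits (Q4.12 assumed)

  def to_fixed(x):
      return np.round(np.asarray(x) * (1 << FRAC)).astype(np.int32)

  def fixed_dense(x_q, w_q, b_q):
      """Integer multiply-accumulate with post-accumulation rescale."""
      acc = x_q.astype(np.int64) @ w_q.astype(np.int64)  # wide accumulator
      return ((acc >> FRAC) + b_q).astype(np.int32)      # rescale, add bias

  def relu_q(x_q):
      return np.maximum(x_q, 0)  # ReLU is a sign-check-and-mux in hardware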
[CORE_SKILLS]
ml_cv:
["PyTorch", "TensorFlow", "JAX", "CNNs", "UNet", "Generative Models", "Sensor Fusion"]
interpretability:
["Attribution Graphs", "Activation Analysis", "Latent Geometry", "Circuit Reasoning", "Failure-Mode Analysis"]
hardware_embedded:
["FPGA (VHDL/Verilog)", "Vivado", "Quartus", "Hardware NNs", "NVIDIA TX2/Jetson", "Fixed-Point Logic"]
systems_tools:
["Python", "C++", "Linux", "Docker", "ONNX", "CAN Bus", "Reproducible Pipelines"]
math_modeling:
["Optimization", "Bayesian Inference", "Stochastic Processes", "Signal Processing"]
[EDUCATION]
Purdue University
M.S., Electrical & Computer Engineering
Focus: ML, Neural Networks, Interpretability, Embedded Systems, VLSI-aware Deep Learning
Graduated: December 2025
University of Michigan–Dearborn
B.S., Computer Engineering (With Distinction)
Capstone: VHDL-based neural network accelerator
Graduated: Winter 2020
[RESEARCH_INTERESTS]
- Interpretability & Mechanistic Analysis
- AI Safety & Robustness
- Embedded ML & Edge AI
- Multimodal Perception & Sensor Fusion
- HW–SW Co-Design & FPGA Acceleration
- Representation Geometry & Latent Space Analysis