_admin@rlr-systems:$ ./profile_init.sh --mode=interpretable

>> [ SYSTEM_BOOT ]
>> [ PROFILE_LOADED ]
>> [ STATUS: ONLINE | MODE: INTERPRETABLE_AI ]

RODERICK LAWRENCE RENWICK

> AI Systems Engineer | ML Researcher | Interpretability Specialist

📍 Bloomfield Hills, MI • RoderickLRenwick@gmail.com • (248) 914-0569 • rodericklrenwick.com • github.com/RLR-GitHub
~/profile$ cat summary.txt

[SYSTEM_SPEC]

AI Systems Engineer with strengths in machine learning, computer vision, mechanistic interpretability, and embedded hardware–software co-design. Experience includes defense ML at Raytheon, safety-critical EV engineering at Ford, and autonomous-vehicle perception research. Builds transparent, reliable AI systems optimized for constrained hardware and safety-relevant environments.

~/career$ ls -l experience/

[EXPERIENCE_LOG]

Raytheon Technologies | Research Intern | Summer 2023

Machine Learning Research | Robotics Intelligence & Sensor Fusion

  • Built structured-light + camera fusion models for robotic anomaly detection in aerospace/defense applications.
  • Developed interpretability tools for activation tracing and fused-sensor reasoning to ensure model transparency (tracer pattern sketched below).
  • Created reproducible diagnostics workflows for safety-aligned ML prototyping in high-stakes environments.
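
A minimal sketch of the activation-tracing pattern (PyTorch forward hooks; illustrative only, not Raytheon code):

  # activation tracer sketch: record per-layer statistics for transparency checks
  import torch
  import torch.nn as nn

  def attach_tracers(model: nn.Module):
      """Register forward hooks that log activation stats for conv/linear layers."""
      trace = {}
      def make_hook(name):
          def hook(module, inputs, output):
              trace[name] = {"mean": output.detach().mean().item(),
                             "std": output.detach().std().item(),
                             "shape": tuple(output.shape)}
          return hook
      for name, module in model.named_modules():
          if isinstance(module, (nn.Conv2d, nn.Linear)):
              module.register_forward_hook(make_hook(name))
      return trace

  model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(), nn.LazyLinear(10))
  trace = attach_tracers(model)
  model(torch.randn(1, 3, 32, 32))
  print(trace)  # layer-by-layer activation summary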

MDAS.ai Autonomous Vehicle Research | Research Assistant | Summer 2018

Perception Systems | University of Michigan–Dearborn

  • Built a CAN/IMU/CV multimodal fusion pipeline on the NVIDIA TX2 for autonomous-shuttle perception research (alignment pattern sketched below).
  • Designed visualization tools to inspect modality contributions and temporal reasoning in real-time systems.
  • Supported real-world AV data collection and iterative perception-model refinement for edge deployment.
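
A stripped-down sketch of the fusion step (synthetic data and hypothetical rates; the real pipeline consumed live CAN/IMU/camera streams on the TX2):

  # align CAN/IMU samples to camera frames by timestamp (zero-order hold)
  import numpy as np

  def align_to_frames(frame_ts, sensor_ts, sensor_vals):
      """For each camera frame, take the most recent earlier sensor sample."""
      idx = np.searchsorted(sensor_ts, frame_ts, side="right") - 1
      return sensor_vals[np.clip(idx, 0, len(sensor_vals) - 1)]

  frame_ts  = np.arange(0.0, 1.0, 1 / 30)    # 30 Hz camera
  imu_ts    = np.arange(0.0, 1.0, 1 / 100)   # 100 Hz IMU
  imu_yaw   = np.random.randn(len(imu_ts))   # synthetic yaw-rate signal
  can_ts    = np.arange(0.0, 1.0, 1 / 50)    # 50 Hz CAN
  can_speed = np.abs(np.random.randn(len(can_ts)))  # synthetic wheel speed

  fused = np.stack([align_to_frames(frame_ts, imu_ts, imu_yaw),
                    align_to_frames(frame_ts, can_ts, can_speed)], axis=1)
  print(fused.shape)  # (30, 2): one fused feature row per camera frame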

Ford Motor Company | Engineering Intern | Summer 2017

EV Functional Safety | Electric Vehicle Systems

  • Supported ISO 26262 functional-safety engineering for EV control systems and battery architectures.
  • Built logic-tracing and failure-propagation tools for embedded software safety-verification teams (core idea sketched below).
  • Contributed to system-behavior reasoning and safety-case documentation for production vehicles.
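
A toy sketch of the failure-propagation idea (hypothetical component names; the production tooling and safety cases are Ford's):

  # trace which components a fault can reach in a dependency graph
  from collections import deque

  # Hypothetical edges: A -> B means "a fault in A can affect B".
  DEPENDS = {"battery_sensor": ["bms"],
             "bms": ["torque_controller", "charge_controller"],
             "torque_controller": ["inverter"],
             "charge_controller": [], "inverter": []}

  def propagate(fault_origin):
      """Breadth-first walk from a faulted component to everything downstream."""
      reached, queue = set(), deque([fault_origin])
      while queue:
          for nxt in DEPENDS.get(queue.popleft(), []):
              if nxt not in reached:
                  reached.add(nxt)
                  queue.append(nxt)
      return reached

  print(propagate("battery_sensor"))  # downstream components to review in the safety case
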
~/projects$ find . -type f \( -name "*.py" -o -name "*.vhdl" \)

[PROJECTS]

./ai_manifold_interpreter.py

Geometric latent-space interpretability framework for analyzing representational structure and concept trajectories in neural networks.
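
A minimal sketch of the latent-geometry idea, assuming a standard PCA projection of layer activations (the repo's actual API may differ):

  # project hidden activations and follow a concept trajectory in PC space
  import numpy as np

  def pca_project(acts, k=2):
      """Project activation vectors onto their top-k principal components."""
      centered = acts - acts.mean(axis=0)
      _, _, vt = np.linalg.svd(centered, full_matrices=False)
      return centered @ vt[:k].T

  # Synthetic stand-in for hidden states sampled along an input interpolation.
  acts = np.cumsum(np.random.randn(50, 128), axis=0)  # 50 steps, 128-d activations
  traj = pca_project(acts)                            # (50, 2) concept trajectory
  print(traj[0], traj[-1])                            # endpoints of the path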

./attribution_graphs.py

Directed influence-path tracing for circuit-level model reasoning and mechanistic interpretability analysis.
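
One common way to score such influence paths, sketched as gradient-times-activation edges between a hidden layer and an output logit (illustrative, not necessarily the repo's exact method):

  # grad x activation as a directed influence-edge weight
  import torch
  import torch.nn as nn

  model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
  x = torch.randn(1, 4)

  h = model[0](x)                 # hidden pre-activations
  h.retain_grad()                 # keep gradients on this non-leaf node
  out = model[2](torch.relu(h))
  out[0, 1].backward()            # trace influence on output logit 1

  edges = (h.grad * h).squeeze(0)            # per-unit influence scores
  print(torch.topk(edges.abs(), 3).indices)  # strongest paths into logit 1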

./catnet_embedded_cv.py

Embedded UNet+CNN computer vision system with integrated interpretability layers for constrained hardware deployment.
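
A compact sketch of the skip-connection-plus-probe pattern (toy channel counts; the deployed network is larger):

  # one UNet stage with an activation probe retained for inspection
  import torch
  import torch.nn as nn

  class TinyUNetStage(nn.Module):
      def __init__(self):
          super().__init__()
          self.down = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
          self.up = nn.ConvTranspose2d(8, 8, 2, stride=2)
          self.fuse = nn.Conv2d(8 + 1, 1, 3, padding=1)  # upsampled + skip input

      def forward(self, x):
          d = self.down(x)
          self.probe = d.detach()  # bottleneck activations kept for interpretability
          return self.fuse(torch.cat([self.up(d), x], dim=1))

  net = TinyUNetStage()
  y = net(torch.randn(1, 1, 32, 32))
  print(y.shape, net.probe.shape)  # output map plus probed bottleneck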

./remy_wgan_gp_sandbox.py

Generative-model interpretability sandbox with gradient/latent visualization hooks for WGAN-GP training analysis.
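
The gradient-penalty term the sandbox instruments, in minimal PyTorch form (standard WGAN-GP formulation; the toy critic below is an assumption):

  # penalty = mean((||grad_xhat D(xhat)||_2 - 1)^2) on real/fake interpolates
  import torch

  def gradient_penalty(critic, real, fake):
      eps = torch.rand(real.size(0), 1, 1, 1)  # per-sample interpolation weight
      x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
      grads, = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)
      grad_norm = grads.flatten(1).norm(2, dim=1)  # visualization hook point
      return ((grad_norm - 1) ** 2).mean()

  critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 1))
  real, fake = torch.randn(4, 1, 28, 28), torch.randn(4, 1, 28, 28)
  print(gradient_penalty(critic, real, fake))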

./fpga_mlp_accelerator.vhdl

VHDL hardware neural network emphasizing deterministic, inspectable activation flow for FPGA deployment.
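
A Python golden model of the fixed-point datapath such a design is typically verified against (the Q4.12 format and 16-bit saturation are assumptions, not necessarily the capstone's parameters):

  # quantize, MAC in a wide accumulator, rescale, saturate: mirrors the VHDL flow
  import numpy as np

  FRAC = 12                           # Q4.12: 4 integer bits, 12 fractional bits
  LO, HI = -(1 << 15), (1 << 15) - 1  # 16-bit signed saturation bounds

  def to_fx(x):
      return np.clip(np.round(x * (1 << FRAC)).astype(np.int64), LO, HI)

  def fx_linear(w_fx, x_fx, b_fx):
      acc = w_fx @ x_fx               # wide integer accumulator, as in hardware
      acc = (acc >> FRAC) + b_fx      # rescale products back to Q4.12
      return np.clip(acc, LO, HI)     # saturating output register

  w, x, b = np.random.randn(4, 8) * 0.5, np.random.randn(8), np.zeros(4)
  y_fx = fx_linear(to_fx(w), to_fx(x), to_fx(b))
  print(y_fx / (1 << FRAC))           # compare against the float reference w @ x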

~$ cat skills.json

[CORE_SKILLS]

{
  "ml_cv": ["PyTorch", "TensorFlow", "JAX", "CNNs", "UNet", "Generative Models", "Sensor Fusion"],
  "interpretability": ["Attribution Graphs", "Activation Analysis", "Latent Geometry", "Circuit Reasoning", "Failure-Mode Analysis"],
  "hardware_embedded": ["FPGA (VHDL/Verilog)", "Vivado", "Quartus", "Hardware NNs", "NVIDIA TX2/Jetson", "Fixed-Point Logic"],
  "systems_tools": ["Python", "C++", "Linux", "Docker", "ONNX", "CAN Bus", "Reproducible Pipelines"],
  "math_modeling": ["Optimization", "Bayesian Inference", "Stochastic Processes", "Signal Processing"]
}

~$ cat education.log

[EDUCATION]

Purdue University

M.S., Electrical & Computer Engineering | Expected: December 2025

Focus: ML, Neural Networks, Interpretability, Embedded Systems, VLSI-aware Deep Learning

University of Michigan–Dearborn

B.S., Computer Engineering (With Distinction) | Winter 2020

Capstone: VHDL-based neural network accelerator

~$ cat research_interests.txt

[RESEARCH_INTERESTS]

  • Interpretability & Mechanistic Analysis
  • AI Safety & Robustness
  • Embedded ML & Edge AI
  • Multimodal Perception & Sensor Fusion
  • HW–SW Co-Design & FPGA Acceleration
  • Representation Geometry & Latent Space Analysis
~/ai$ python generate_cover_letter.py --mode=interpretable

[AI_COVER_LETTER_GENERATOR]

[OUTPUT]:

Last Updated: 2025-01-13 | Version: 3.0.0-interpretable