Haixi Zhang

Robotics & Perception | Real-time CV & Applied ML | Sensor Data & Systems Engineering

Hi! I’m Haixi Zhang, a Robotics / Perception / Applied ML / Computer Vision Engineer. I build real-world sensor-driven systems—from data pipelines and core algorithms to performance optimization and deployment on production/edge platforms. I hold a B.S. in Electrical & Computer Engineering (University of Rochester) and an M.Eng. in Electrical & Computer Engineering (Cornell University), and I’m currently based in San Jose, CA.

My focus is practical: turning high-rate sensor streams into reliable signals and models that run in real time, are testable, and can be shipped.

What I do

  • Sensor data & pipelines
    • Build ETL, synchronization, and validation workflows for multi-sensor logs (camera / LiDAR / IMU / CAN); a small timestamp-alignment sketch follows this list
    • Curate datasets and define evaluation protocols for training, regression testing, and debugging
    • Automate large-scale processing with workflow tools (e.g., Airflow) and containerized jobs (Docker/K8s)
  • Real-time computer vision & applied ML
    • Develop perception modules such as detection, tracking, depth-related tasks, and feature extraction using Python + OpenCV + PyTorch
    • Train and evaluate models with clean metrics, ablations, and failure-case analysis (day/night, noise, motion, domain shifts)
    • Package models for deployment and iterate using an end-to-end data → model → evaluation loop
  • Signal processing & time-series analysis
    • Design filtering and time-series pipelines for sensor signals (denoising, feature extraction, temporal modeling); a small low-pass denoising sketch follows this list
    • Combine classical methods with learning-based approaches when it improves robustness and interpretability
  • Performance-oriented software engineering
    • Write production C++/Python on Linux; profile and optimize for latency/throughput and resource usage
    • Use multiprocessing/multithreading where appropriate and GPU acceleration when needed
    • Deploy with ONNX / TensorRT on edge/production platforms and integrate into larger C++ systems; a small ONNX export sketch follows this list
  • Robotics systems
    • Integrate perception into ROS/ROS2 systems (TF2, catkin/colcon) with reliable interfaces and testing
    • Work with autonomy foundations: estimation (EKF-style), mapping (occupancy grids), and planning (A*); a small grid-planning sketch follows this list
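
To give a concrete flavor of the multi-sensor synchronization work above, here is a minimal sketch that aligns a camera log against an IMU log by nearest timestamp using pandas. The file names, column names, and the 10 ms tolerance are illustrative assumptions, not any specific project's schema.

```python
# Minimal sketch: align camera frames to the nearest IMU sample in time.
# File names and column names ("t", "frame_id", "ax", ...) are hypothetical.
import pandas as pd

camera = pd.read_csv("camera_log.csv")  # columns: t (seconds), frame_id
imu = pd.read_csv("imu_log.csv")        # columns: t (seconds), ax, ay, az

# Both streams must be sorted by time before an as-of join.
camera = camera.sort_values("t")
imu = imu.sort_values("t")

# For each camera frame, take the nearest IMU sample within a 10 ms window;
# frames with no match keep NaNs and get flagged during validation.
aligned = pd.merge_asof(camera, imu, on="t", direction="nearest", tolerance=0.010)
unmatched = aligned["ax"].isna().sum()
print(f"{unmatched} frames had no IMU sample within 10 ms")
```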
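
On the signal-processing side, this is a minimal denoising sketch for a single noisy channel, using a zero-phase Butterworth low-pass from SciPy. The 100 Hz sample rate, 5 Hz cutoff, and synthetic signal are illustrative assumptions rather than settings from any particular sensor.

```python
# Minimal sketch: zero-phase low-pass filtering of a noisy 1-D sensor channel.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                     # sample rate, Hz (assumed)
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 1.0 * t)            # slow "true" motion
noisy = clean + 0.3 * np.random.randn(t.size)  # additive sensor noise

# 4th-order Butterworth low-pass at 5 Hz, run forward and backward
# (filtfilt) so the output is not phase-shifted relative to the input.
b, a = butter(4, 5.0, btype="low", fs=fs)
denoised = filtfilt(b, a, noisy)
```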
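
For the deployment path, this is a minimal sketch of exporting a PyTorch model to ONNX before building a TensorRT engine. The placeholder model (a torchvision ResNet-18), the input shape, and the opset version are assumptions that would change per project.

```python
# Minimal sketch: export a PyTorch model to ONNX for edge deployment.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # placeholder network
dummy = torch.randn(1, 3, 224, 224)                       # one 224x224 RGB frame

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)
# The resulting model.onnx can then be compiled into a TensorRT engine
# (e.g., with trtexec) and called from the surrounding C++ system.
```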
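
And for the planning side, a minimal A* sketch over a toy 2D occupancy grid, with 4-connected moves and a Manhattan-distance heuristic; the grid encoding (0 = free, 1 = occupied) and the connectivity are illustrative choices, not a particular robot's map format.

```python
# Minimal sketch: A* path search on a small occupancy grid.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]          # heap of (f = g + h, cell)
    g = {start: 0}                   # best known cost-to-come
    parent = {start: None}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:              # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    parent[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(open_set, (ng + h, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```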

I’m comfortable contributing across the stack—from data and modeling to integration and benchmarking—and I enjoy collaborating with cross-functional teams to improve end-to-end system quality.

When I’m not immersed in code or research, I’m fascinated by world history, particularly the Renaissance period through the Second Industrial Revolution. There’s something captivating about how technological and social innovations from those eras laid the groundwork for today’s advancements. I’m also a big fan of Yes, Prime Minister – the wit and political satire never get old!


Interested in autonomous systems, computer vision, or just want to discuss the future of robotics? I’d love to connect!