Xuewen Yao

PhD student

The University of Texas at Austin


I am a PhD student in the Daily Activity Lab at the University of Texas at Austin. I am interested in leveraging wearable/mobile sensors and machine learning algorithms to analyze and model human signals. In particular, I study (stress) signals from mother-infant pairs, collected longitudinally through their daily interactions. I am supervised by Prof. Kaya de Barbaro and Prof. Edison Thomaz.

Two characteristics define my research: “in-the-wild” datasets and individual differences. “In-the-wild” datasets are much noisier and harder to work with compared to lab data, but they better reflect and model the real world. I aim to build models that capture individual differences, because in health-related applications a satisfactory mean accuracy is not enough: the model needs to perform well on each data point.

I have worked extensively with motion and audio sensors and have developed models for parent holding behavior detection, parent affect classification, and infant crying/fussing classification.


  • Wearable Computing
  • Deep Learning
  • Activity Recognition
  • Affect Detection
  • Natural Language Processing


  • PhD in Electrical and Computer Engineering, 2023 (Expected)

    The University of Texas at Austin

  • MSc in Computer Science, 2018

    Georgia Institute of Technology

  • BEng in Information Engineering, 2016

    City University of Hong Kong



Software Engineer Intern


2020-05-18 – 2020-08-07 Remote (COVID-19)
  • Worked with Apple Ad Platforms

Graduate Research Assistant

The University of Texas at Austin

2018-08-28 – Present Austin, TX
  • Collected and cleaned 780 hours of in-home audio recordings
  • Worked on audio classification of infant fussing and crying using deep neural networks
  • Incorporated individual differences into feature engineering and modeling
  • Scraped two years of conversations between Postpartum Support International and affected mothers and built a chatbot from them

Software Development Engineer Intern


2017-05-09 – 2017-07-28 Seattle, WA
  • Worked with Amazon Search User Experience
  • Used natural language processing to analyze users’ queries and search history
  • Developed a novel recommendation model
  • Model projected to grow annual US sales by $101.7 million

Graduate Research Assistant

Georgia Institute of Technology

2016-08-22 – 2018-05-04 Atlanta, GA
  • Worked on activity recognition using wearable motion sensors
  • Soldered and compared the performance of multiple motion sensors
  • Synchronized data from different sensors on a single platform, examined raw data plots and video annotations, and implemented feature engineering
  • Detected patterns of proximity and physical contact and analyzed stress-related behaviors

Junior Researcher

City University London

2015-06-01 – 2015-08-01 London, UK
  • Worked on privacy-preserving speaker verification and identification
  • Self-studied and implemented MFCC (Mel-Frequency Cepstral Coefficients) and GMM (Gaussian Mixture Models) for speaker verification in MATLAB
  • Applied randomization to speaker verification and demonstrated its feasibility
  • Researched real-world applications of speaker authentication

Recent Posts

A simple tutorial on sound classification

My experience using different features and models to classify different sounds

How to use supercomputers from Texas Advanced Computing Center (TACC)

Using TACC to run code and train models


  • 305 E. 23rd St., Austin, TX, 78741, United States
  • Institute of Mental Health Research, Patton Hall
  • Email Me