Thomas grew up in Galveston and attended a large, underfunded public high school that The New York Times ranked among the 2% in Texas deemed "academically unacceptable". In high school she began programming in C++. Both of her parents have graduate degrees. Thomas earned her bachelor's degree in mathematics at Swarthmore College, where she encountered significant discriminatory behaviour. At Swarthmore she was elected to the Phi Beta Delta honour society. She moved to Duke University for her graduate studies, finishing her PhD in mathematics in 2010. Her doctoral research involved a mathematical analysis of biochemical networks. During her doctorate she completed an internship at RTI International, where she developed Markov models to evaluate HIV treatment protocols.

Thomas joined Exelon as a quantitative analyst, scraping internet data and building models to inform energy traders. In 2013 she joined Uber, where she developed the driver interface and surge-pricing algorithms using machine learning. She then became a teacher at Hackbright Academy, a school for women software engineers.
Research and career
Thomas joined the University of San Francisco in 2016, where she founded and now directs the Center for Applied Data Ethics. There she has studied the rise of deepfakes and bias in machine learning and deep learning. When Thomas started developing neural networks, only a few academics were doing so, and she was concerned by the lack of shared practical advice. While there is considerable recruitment demand for artificial intelligence researchers, Thomas has argued that although these careers have traditionally required a PhD, access to supercomputers and large data sets, these are not essential prerequisites. To overcome this apparent skills gap, Thomas established Practical Deep Learning for Coders, the first university-accredited open-access certificate in deep learning, and created the first open-access machine learning programming library. Thomas and Jeremy Howard co-founded fast.ai, a research laboratory that seeks to make deep learning more accessible. Her students have included a Canadian dairy farmer, African doctors and a French mathematics teacher.

Thomas has studied unconscious bias in machine learning, emphasising that even when race and gender are not explicit input variables in a data set, algorithms can become racist and sexist when that information is latently encoded in other variables. Alongside her academic career, Thomas has called for more diverse workforces to prevent bias in systems using artificial intelligence. She believes that more people from historically underrepresented groups should work in tech, both to mitigate some of the harms that certain technologies may cause and to ensure that the systems created benefit all of society. In particular, she is concerned about the retention of women and people of colour in tech jobs. Thomas serves on the Board of Directors of Women in Machine Learning.
She served as an advisor for Deep Learning Indaba, a non-profit that seeks to train African people in machine learning. In 2017 she was selected by Forbes magazine as one of 20+ "leading women" in artificial intelligence.
Work on data ethics and diversity
Thomas is concerned about the lack of diversity in AI, and believes that many qualified people are not being hired. She has particularly focused on the poor retention of women in tech, noting that 41% of women working in tech leave within 10 years, over twice the attrition rate for men, and that those with advanced degrees are 176% more likely to leave. Thomas believes AI's "cool and exclusive aura" needs to be broken in order to unlock the field for outsiders and make it accessible to those with non-traditional and non-elite backgrounds.