Meet Dr. Fei-Fei Li, Technical Leadership Abie Award Winner

The most prestigious accolade awarded by AnitaB.org, the Technical Leadership Abie Award celebrates a woman who has led a team that developed a product, process, or innovation that made a notable impact on business or society. This year’s winner is Dr. Fei-Fei Li, Professor and Director of Stanford University’s Human-Centered AI Institute.

A pioneering thought leader in artificial intelligence (AI) through her groundbreaking computer vision research, Fei-Fei has had a transformational impact on the industry by democratizing AI, driving future technological innovations, and advocating for diversity in STEM and AI internationally. She also co-founded the Stanford AI Lab OutReach Summer program (SAILORS), which evolved into the nonprofit AI4ALL.

We spoke to Fei-Fei about her vision for the future of AI and her efforts to make the field more inclusive.

How did you get into the field of artificial intelligence (AI)?

I have always been very curious and passionate about understanding nature and math. I majored in physics at Princeton University when I was in college. What I loved about physics were those foundational questions and the exploration of the origin of the universe. But as I studied physics, I found that some of the greatest physicists of the 20th century, such as Einstein and Schrödinger, spent the last years of their lives asking questions not about the physical world but about the biological world. They wondered about the meaning of life and intelligence, which really piqued my interest. I became fascinated by how the brain works and the computation of intelligence. I think that kind of interdisciplinary background, from physics to neuroscience, eventually brought me to AI research.

The first memorable project I did in AI (or computer vision, as it was then known) was on “one-shot learning.” Humans are able to learn very quickly; for example, children can look at a picture book on animals and recognize those same animals in real life, all before they can speak in full sentences. My advisor, Prof. Pietro Perona of Caltech, and I explored how to enable one-shot learning in computers. It really ignited my interest in working on this intersection of AI problems that are highly inspired by human cognition.

What is the most rewarding aspect for you as a professor?

As a professor, my main job is to mentor and work with the future scientists of the world. I’ve had generations of undergraduate, master’s, doctoral, and postdoctoral students. My philosophy of mentoring is to focus on their individuality. Even though the students are studying similar sciences, they are all different. I try to understand them and empathize with them so I can best support each individual student. Through that process, I help these students realize their potential, which sometimes goes beyond their own expectations. When that happens, it’s the most rewarding experience for both sides.

Tell us more about the Stanford Institute for Human-Centered AI (HAI) and its mission.

HAI is a newly established institute that we founded this year. It is built on three major principles. First, we believe the next generation of AI technology will be further inspired by cross-pollinating brain sciences, cognitive science, and neuroscience with AI research. Second, we believe we need to welcome the social sciences and the humanities to help understand, anticipate, and guide AI’s societal and human impact. This is especially true for critical issues like the future of work, the ethical code of AI, privacy, fairness and bias, geopolitics, and many other human aspects of AI. Last, but not least, we believe that the real potential of AI is to enhance human capabilities, not to replace them.

We want to be thought leaders in demonstrating, developing, and designing such applications and technologies in human-centered AI. The idea behind HAI is to bring interdisciplinary thinking and research to guide the course of AI’s future and inform the public and the policymakers.

How has interdisciplinary thinking helped you in your own career?

My entire career has had an underlying theme of interdisciplinary research. For one of my projects, I used cutting-edge computer vision algorithms to analyze surgical videos and understand the movements surgeons make in training and in practice. I worked with Arnie Milstein, a professor and director at Stanford’s School of Medicine; Amy Jin, a high school student passionate about STEM; Serena Yeung, a Ph.D. student in my lab; and Jeff Jopling, a surgeon. It was an incredible collaboration of interdisciplinary researchers and students.

You are an advocate for more diversity in AI. Why is it so crucial to have women and underrepresented groups in this field?

Machine values reflect the human values of the developers of technology. Sadly, this means underrepresented groups are more likely to be impacted in adverse ways. It takes diverse life experiences to successfully solve problems that matter to different groups of people in different communities. We need to involve people of all backgrounds to ensure that technology is fair.

As an immigrant, I think my own cross-cultural background has allowed me to see rich and unique perspectives of the world and of human lives. I also work with a lot of diverse students, which gives me a further understanding of people from all over the world and different walks of life. I think this is a beautiful part of America, and I feel very lucky and privileged.

Aside from your work as a professor, you’ve helped many students through AI4ALL. What inspired you to form this program?

A few years ago, I was thinking about what the future of AI would look like — and more importantly, how AI would shape the future of our world. Luckily, Olga Russakovsky, a senior Ph.D. student in my lab, walked into my office and expressed her own desire to help change the future of AI, specifically around the representation of women in the field. Within minutes of sharing our ideas, we realized we wanted to do something together.

Towards the end of 2014, we started planning a pilot project that morphed into SAILORS, Stanford’s outreach summer program for high school girls. After two very successful years, we knew we wanted to spread this program nationwide and internationally, so we created the nonprofit AI4ALL. Our goal is to increase diverse representation in the field of AI through human-centered AI education. We currently run programs on 11 university campuses across North America, and every program is customized to its local community.

What is your vision for the future of AI?

I really, truly hope AI can enhance the human condition. AI is a prevalent, pervasive technology that has the potential to improve healthcare, transportation, sustainability, education, and manufacturing. It’s becoming a human-centered interdisciplinary field that not only continues to push technology forward, but also works on critical human and societal issues related to this technology. It shows how technology can be applied to improve the human condition.

Just as AI is an interdisciplinary field, I think the people working in AI will come from all different backgrounds. I truly believe innovation and creativity get a boost when a diverse group of people work together. We bring different ideas to the table and can work out solutions that are collectively smarter, more brilliant, and better for everyone. The bottom line is, when we bring diversity into technology and the creation of a future world, everybody benefits.

Watch our GHC 19 videos of Dr. Fei-Fei Li.