Thoughts on The Worlds I See

The Worlds I See is a very good story, and I have some thoughts to share.

Cover art of The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI.

Excerpt of the book

The Worlds I See by Dr. Fei-Fei Li is a story of science in the first person, documenting one of the century’s defining moments from the inside. It provides a riveting story of a scientist at work and a thrillingly clear explanation of what artificial intelligence actually is—and how it came to be. Emotionally raw and intellectually uncompromising, this book is a testament not only to the passion required for even the most technical scholarship but also to the curiosity forever at its heart.

Macmillan

One of my goals this year is to learn more about AI, and after listening to Dr. Fei-Fei Li on the Armchair Expert podcast, I knew this book was worth reading. The Worlds I See is both a biography and an introduction to the field of AI. Biographies have an advantage: they follow a chronology where you learn about the person (and in this case AI) piece by piece until you have a foundation.

💡
The Worlds I See is a very good story.

A brief history

Advancements happen in small chunks over time. Most people weren't aware of AI until the end of 2022, when ChatGPT exploded onto the scene and kicked off the bubble we're in now. However, the field has been developing for some 70 years. That's a long time to become an overnight success!

Chapter 3 highlighted some interesting moments in AI history:

  • In 1956 a two-month, ten-person research proposal was submitted to Dartmouth College to study artificial intelligence. The goal was to describe every aspect of learning so precisely that a machine could simulate it.
  • In the 1970s Ed Feigenbaum, a SAIL researcher, created a field known as "knowledge engineering".
    • The goal was to bring together facts about a domain, like medicine, into libraries machines could analyze, so that the machines could then answer questions like a human would.
    • There were early successes, but the logistical problems of handling the sheer volume of information were too much.
    • Research stalled.
  • In the late 1980s and early 1990s attention turned to algorithms that could solve problems by discovering patterns in examples rather than through explicit programming.
    • Researchers coined the term "machine learning".
  • In 1986 a group of researchers published a technique called "backpropagation". When the network's output is wrong, the error is sent backwards through the network, from the output layer to the input layer, and the weights along the way are adjusted to reduce it. The more examples the network is exposed to, the more these adjustments accumulate and the more accurate it becomes.
    • One of those researchers was Geoff Hinton.
  • Yann LeCun, a researcher at Bell Labs, began demonstrating the ability of a "neural network" to accurately recognize handwriting. He showed the network thousands of examples of human-handwritten zip codes (including mistakes) so it could learn the patterns of digits.
    • The work was so accurate that within a few years it was deployed in ATMs around the US to read the digits written on checks.
    • LeCun had been a postdoctoral researcher under Geoff Hinton.
    • Randomly, I’ve been following Yann LeCun on Twitter for a while because his AI insights have been good.
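The backpropagation idea above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (not the 1986 paper's setup): a network with one sigmoid hidden neuron and one sigmoid output neuron, trained with squared loss on a single toy input. The error computed at the output is propagated backwards to the hidden layer, and both layers' weights are nudged to reduce it.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny network: one hidden neuron feeding one output neuron.
w1, b1 = random.uniform(-1, 1), 0.0   # input  -> hidden
w2, b2 = random.uniform(-1, 1), 0.0   # hidden -> output

def forward(x):
    h = sigmoid(w1 * x + b1)
    y = sigmoid(w2 * h + b2)
    return h, y

def train_step(x, target, lr=0.5):
    """One forward pass, then backpropagate the error and update weights."""
    global w1, b1, w2, b2
    h, y = forward(x)
    # Error signal at the output layer (squared loss, sigmoid derivative).
    d_y = (y - target) * y * (1 - y)
    # Backward pass: send the error from the output layer to the hidden layer.
    d_h = d_y * w2 * h * (1 - h)
    # Adjust each layer's weights to reduce the error next time.
    w2 -= lr * d_y * h
    b2 -= lr * d_y
    w1 -= lr * d_h * x
    b1 -= lr * d_h

# Repeated exposure to one toy example: input 1.0 should produce ~0.9.
for _ in range(2000):
    train_step(1.0, 0.9)

_, y = forward(1.0)
print(round(y, 2))  # after training, the output lands close to the 0.9 target
```

After a couple of thousand updates the output for input 1.0 settles near the target, which is the whole trick: errors flowing backwards, small weight adjustments accumulating into accuracy.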

A small community

I don't study the field of AI, nor the people behind its advancements. Yet there were many names in the book I recognized. Many started in academia and then moved into industry to work at impactful companies. Some started their own companies.

Dr. Li eventually moves her computer vision lab from Princeton to Stanford, home of SAIL, the Stanford Artificial Intelligence Lab. Two people who had run (or been involved with) SAIL in the past were familiar names:

  • Sebastian Thrun → He built Google's self-driving car unit, then created Udacity and launched its first course, on self-driving cars.
  • Andrew Ng → He co-founded Coursera (along with a few other startups) and hosts AI educational videos on deeplearning.ai.

A few other people were in Dr. Li's lab:

  • Andrej Karpathy → He was an early employee of OpenAI and left to build Tesla’s self-driving / Autopilot program. He's given a lot of talks about working on that program at Tesla.
  • Timnit Gebru → I remember hearing about her AI work at Google and subsequently being forced out amid controversy.

It's not a surprise, given how small a community AI research was, that people move between being researchers, teachers, and entrepreneurs.

She’s a badass

Dr. Li’s story is filled with random explorations and chance encounters. Her family moved to the US when she was in her teens: English as a second language, an outsider, working very hard just to keep up with her peers, and working weekends to support her family. Her mom had health issues that fell to her to manage, because her dad, although very present, wasn’t a solid parental figure.

She learns English by reading the English classics. This bonds her to one of her high school teachers, who plays the part of a second dad later in her life.

She hustles her way to a doctorate in computer vision, then to a teaching position at Princeton and later Stanford. Li marries, but her husband is a professor at another school, so they carry on a long-distance relationship. Then she gets pregnant and has to raise a kid, care for her parents, and lead a lab, largely on her own. It’s a lot.

She's had an impactful career. Dr. Li and one of her PhD students took inspiration from WordNet and created ImageNet while at Princeton. ImageNet is a massive (30,000 images at the time), publicly available image dataset complete with annotations.

When ImageNet came out it was ignored. Why would anyone need such a massive set of images? So she created a competition to entice researchers to use the dataset and published the results. It took time, but in 2012 a team entered a neural network system in the challenge and the results blew the competition away.

It turns out neural networks, an old concept, had died out decades earlier because there wasn't enough data available to train the models. Dr. Li's work helped bring neural networks back to the mainstream.

She’s a badass.

Explainability

Towards the end of the book Dr. Li presents a new field of research called "explainable AI" (XAI) or "explainability". It sounds an awful lot like testability: you design and build systems that can explain their decisions, so people can better understand how the system arrived at its outcome. Seems pretty important.

We're in AI 1.0. Watch out

One of the things Dr. Li says at the end is to remember we are in AI 1.0. Despite the widespread use of LLMs and AI to create content, and the challenges that brings, we are still at the beginning, and there's a lot of progress to be made.
