AI Q&A: Tesla Wells

Isabella Roden

2024-06-04

Image of Tesla Wells
As an algorithm engineer at Mobi, Tesla works on modeling intent so that Mobi’s AI can understand a person’s values and objectives, allowing it to produce better plans. Tesla is currently completing a PhD in Autonomous Robotics at MIT and was part of Mobi Chief Scientist Brian Williams’ AI robotics research group. As someone working on AI’s bleeding edge, Tesla is passionate about both the opportunities and the ethics of AI.

“People think of computers as infinite resources, but that isn’t true.”

1. What do you think is one of the biggest misconceptions about AI?

People tend to define AI as something magical or mysterious, so as soon as we develop a way for a machine to do something intelligent, people say, “well, that’s not AI, because I understand it.” There’s a mystique around not understanding something, like how we don’t understand how human intelligence works, and that mystique makes it difficult to engage with the tangible parts of AI: truth-seeking, building a better understanding of the world, reasoning, and making decisions about the world.

This means that outside of academia, the field has a strong relationship with hype: people get excited about an AI advancement, then things crash when it doesn’t deliver. There’s a feeling that something is about to break through into an unknown area, when in fact the recent tech is very explainable and understandable. When something can’t be understood, it becomes glamorized.

2. What’s unique about how Mobi is working with AI?

Mobi really engages with all types of AI, instead of just machine learning and large language models (LLMs). Those types of AI are very good at finding trends in large amounts of data, and there’s a lot of low-hanging fruit a small team can produce with just that, so it’s a popular model for startups. But Mobi has intentionally invested in people with specialties in AI areas like schedule and route generation, multi-agent artificial intelligence, and inference. We pull from a number of subdisciplines of AI that require specialization and more depth of math or research knowledge in order to construct really robust systems that have multiple use cases and are very high quality given the amount of data we have.

Another unique part of what we’re doing here is that we can explain a lot of the decision points in our AI algorithms, which helps people understand our software better and provides more transparency. When you’re able to provide reasoning, you can take more responsibility for the products you develop, because you can make better guarantees that there is a specific process for generating your outputs. For example, a lot of companies have had to add disclaimers saying that results generated by LLMs aren’t always accurate, and companies that use chatbots have to say “we’re not responsible if the chatbot says something incorrect,” because these types of AI tend to make things up in order to generate an answer. So our process is more trustworthy.
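To make that contrast concrete, here is a minimal illustrative sketch (a hypothetical example, not Mobi’s actual code) of a planner that records the rule behind each decision point, so every output arrives with an audit trail rather than as an unexplained answer:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    choice: str
    reasons: list[str] = field(default_factory=list)  # audit trail of applied rules

def pick_route(routes: dict[str, dict], max_cost: float) -> Decision:
    """Choose the fastest route under a cost cap, recording why each
    alternative was kept or rejected."""
    reasons: list[str] = []
    feasible: dict[str, dict] = {}
    for name, info in routes.items():
        if info["cost"] > max_cost:
            reasons.append(f"rejected {name}: cost {info['cost']} exceeds cap {max_cost}")
        else:
            reasons.append(f"kept {name}: cost {info['cost']} within cap {max_cost}")
            feasible[name] = info
    if not feasible:
        return Decision("no feasible route", reasons)
    best = min(feasible, key=lambda n: feasible[n]["minutes"])
    reasons.append(f"chose {best}: fastest of the feasible routes")
    return Decision(best, reasons)

# Every output can be traced to an explicit, inspectable process.
result = pick_route(
    {"highway": {"cost": 8.0, "minutes": 25},
     "surface": {"cost": 2.0, "minutes": 40}},
    max_cost=5.0,
)
print(result.choice)               # surface
print("\n".join(result.reasons))   # the reasoning behind the choice
```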

“When something can’t be understood it becomes glamorized.”

3. What are the biggest challenges of working with AI right now?

So much of the field is being driven by hype cycles, which makes it difficult to get funding for tech grounded in reasoning and facts. People think of computers as infinite resources, but that isn’t true. Companies with access to large amounts of data and compute power can brute-force a lot of specific types of AI, which means startups, and even other countries, get pushed out because they don’t have access to those tools.

4. What is the most exciting future change that could come from AI?

I feel that when you are working with an individual, you don’t necessarily need to abstract their needs and ideas; you can just work with them as a person. But as you coordinate larger numbers of people, you’re required to abstract more of who those people are and what they want or need. As you model more people, you force them to fit into some sense of normality versus deviation from the norm, and you introduce some amount of error. So when you’re making decisions about lots of people, you have an inherent bias against people who are “deviant” in some way. I’d love to see systems that can coordinate large numbers of people without punishing those who are different or erasing what makes them different from each other.

I really feel this is a theme throughout modern history: bigger questions around centralized planning come from both ends of the political spectrum. This is what inspired me to do AI research in the first place. I wanted to think of ways to model people in teams, because when we have better models of individuals, we get better results. The tradeoff is that it takes more time, but if we can capture people’s needs and wants, we can make better plans for groups. Throwing computational power at these problems can reduce the abstraction needed to coordinate lots of people, and that’s something I think AI could do well.
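As a toy illustration of that tradeoff (a minimal sketch with made-up people and hours, not Mobi’s actual approach), consider scheduling a shared event: collapsing everyone into an average preference erases the person whose schedule deviates from the norm, while modeling each individual’s constraints keeps them in the plan:

```python
# Hypothetical group-scheduling example: norm-based abstraction vs.
# modeling each individual. All names and hours are invented.

people = {
    "ana":   {"preferred": 9,  "available": range(8, 18)},
    "ben":   {"preferred": 10, "available": range(8, 18)},
    "chris": {"preferred": 17, "available": range(16, 20)},  # the outlier schedule
}

# Norm-based plan: collapse everyone to an average preference.
avg = round(sum(p["preferred"] for p in people.values()) / len(people))
print(f"average-based start: {avg}:00")  # 12:00, which excludes chris entirely

# Individual-based plan: search for an hour every person can actually attend.
feasible = [
    hour for hour in range(8, 20)
    if all(hour in p["available"] for p in people.values())
]
print(f"feasible for everyone: {feasible}")  # [16, 17], keeps chris in the plan
```

The individual-based search costs more computation as the group grows, which is exactly the tradeoff described above: more compute buys less abstraction.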
