490: Wait Until Physics Has Happened

Transcript from 490: Wait Until Physics Has Happened with Nikolaus Correll, Christopher White, and Elecia White.

EW (00:00:06):

Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Professor Nikolaus Correll. We are going to talk about robots, which of course is one of my favorite subjects.

CW (00:00:20):

Hi Nikolaus. Thanks for joining us.

NC (00:00:22):

Hi Elecia. Hi Chris. Nice to meet you. Thanks for having me on the show.

EW (00:00:26):

Could you tell us about yourself as if we met at, I do not know, a new faculty orientation dinner?

CW (00:00:34):

<laugh>

NC (00:00:36):

Yeah. I have been a professor of computer science for 15 years now, at the University of Colorado. I do research on robotics. Most recently I started looking into generative AI models and humanoids, and applying those to robotic manipulation.

EW (00:00:54):

And of course, that is where I have all the questions. But first we want to do lightning round, which is short answers and we want short- Nope. It is short questions and we want short answers. And if we are behaving ourselves, we will not ask for a lot more detail like, "Can I be your grad student?" Are you ready?

NC (00:01:11):

Yep. Thank you.

CW (00:01:13):

You are really going to make me ask this question? This is listed in a thing. I did not come up with this. Which is better, Caltech or MIT? <laugh>

NC (00:01:23):

<laugh> I think MIT is probably better. More people.

EW (00:01:27):

Well, I am glad I did not put Harvey Mudd in there. Because he would have just said, "There are no people there."

NC (00:01:31):

Who?

EW (00:01:31):

Yeah.

CW (00:01:31):

<laugh>

EW (00:01:31):

Other than-

NC (00:01:33):

That is a mean question.

EW (00:01:34):

It is a mean question. <laugh>

CW (00:01:35):

<laugh>

NC (00:01:37):

Yes. It has a history.

EW (00:01:39):

Other than Robotics 101, what is the single most important college class when learning robotics?

NC (00:01:45):

Discrete mathematics, probably, from a computer science perspective.

CW (00:01:51):

If I was looking for a robotics grad program, what would be some things at the various schools that I should look for?

NC (00:01:58):

That is a good question, because we just started a robotics grad program. Of course it is the faculty and it is critical mass. It is relationships with industry. I think mostly, cool research going on at the university that you are interested in.

EW (00:02:15):

All right. Well, I have so many robotic questions that I am actually going to cut this short.

CW (00:02:20):

Well, he has to do favorite fictional robot.

EW (00:02:22):

Okay. Right, right. What is your favorite fictional robot?

NC (00:02:25):

It is the T-1000.

CW (00:02:26):

<laugh>

NC (00:02:27):

Sorry, the T-800. Because it is the nice version with the upgrade, you know, the Arnold Schwarzenegger one, where he comes back in Terminator 2.

CW (00:02:39):

"I will be back."

EW (00:02:40):

Okay. So you have a book called "Introduction to Autonomous Robots." Could you tell us a little bit about it? Like, who should read it?

NC (00:02:52):

Yeah. The book came about for teaching robotics to computer science students. I started it maybe 15 years ago as lecture notes, then as an open source book. Then a few years back, as our robotics faculty grew, my colleagues joined the effort and we then published it with a real publisher.

(00:03:18):

The goal is still to really provide everything computer science people need to know. Of course, other people who want to build robots are equally addressed. But the approach is very much for people who come from programming or want to program the robot, versus building robots from scratch.

CW (00:03:47):

I know this is a silly question, but can you define what a robot is?

EW (00:03:50):

<laugh>

CW (00:03:51):

Because I think- Well, lay people think C-3PO. Engineers think anything with a servo motor and code running. It is probably a mix of things. But what do you think? What is your definition of what a robot is?

NC (00:04:07):

You definitely know one when you see one.

CW (00:04:08):

<laugh>

EW (00:04:08):

<laugh>

NC (00:04:11):

I am giving an equally mean answer, but one that has a lot of depth to it. But it is a difficult problem. In the class we usually start doing that by showing people different things, like an elevator, a coffee maker, an autonomous car, and also C-3PO-like things. Then we ask them, "Okay. What is a robot? What is not?"

(00:04:37):

It is very difficult to define that. It has to have a confluence of sensing and actuation. So it has to move, and that has to be wired in some kind of intelligent nontrivial manner. But then when you look at very simple robots, where the sensing is directly wired to the actuation, I think they are also robots already, like insect-like things.

(00:05:03):

It is probably almost as difficult as asking, "What is life?" So, "Where does life start?" and, "What do you really need? What are the ingredients? Is it a certain complexity where that mechanism then becomes alive?" Sorry for digressing, but the robot question is equally difficult to answer.

CW (00:05:22):

Yeah.

EW (00:05:23):

Going back to your book, I like that it is for computer science folks. One of the things that I always have trouble with is the mechanicals of it. In your course, on Coursera, which is connected to the book? Is it connected to the book?

NC (00:05:44):

Yeah. The Coursera course follows the book.

EW (00:05:48):

You use the Webots simulator, which is sort of ROS 2 Gazebo made much, much, much easier. What made you decide to use Webots?

NC (00:06:02):

It is a scaling issue to teach the class to multiple people, to many people at once. We always used the simulator, we always used Webots. As a simulator, we never used Gazebo or anything else. But sometimes I tried to teach the class using Sparki robots, and then every student could have a Sparki. So you can have maybe 60 students with 30 Sparkies, or something like this, and then they would share. But I also realized that it is becoming very much an embedded systems class at this point.

(00:06:36):

And when you actually use ROS, which we also have done with a Baxter robot, it becomes really a software engineering class. And the people who do not have those skills, embedded Linux skills, they basically fall through the cracks. So with Webots, you can really focus on the discrete algorithms that drive the robot in the simulator, and a few lines of Python.
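
For listeners who want to see what "a few lines of Python" looks like in practice, here is a minimal sketch of a Webots controller. The device names ("left wheel motor", "right wheel motor", "ps0") and the threshold are assumptions matching the e-puck model; other robots name their devices differently.

```python
# Minimal Webots controller sketch for a differential-drive robot (e-puck-style
# device names assumed; they vary by robot model).
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

left = robot.getDevice("left wheel motor")
right = robot.getDevice("right wheel motor")
front_sensor = robot.getDevice("ps0")
front_sensor.enable(timestep)

# Velocity control: set position to infinity, then command wheel speeds.
left.setPosition(float("inf"))
right.setPosition(float("inf"))

while robot.step(timestep) != -1:       # step() lets the simulated physics advance
    obstacle = front_sensor.getValue() > 80.0
    if obstacle:                         # crude avoidance: spin in place
        left.setVelocity(-2.0)
        right.setVelocity(2.0)
    else:
        left.setVelocity(3.0)
        right.setVelocity(3.0)
```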

EW (00:07:02):

I like that about it. But defining the robot has been really difficult for me. I want to do a robot that is very different from what you have in your class, or that is already defined in Webots. But the mechanicals part is beyond me. How do I learn that?

NC (00:07:24):

You mean putting together the mechanics in the simulator? Like, starting from a wheel and adding 3D objects to create your own robot?

EW (00:07:34):

Yes, I can follow the instructions, but I cannot do it myself.

NC (00:07:38):

You could. I know what you mean. I let the students in the first exercise, lab session, follow the Webots tutorial to build some simple robots that drive. Later I want them to learn how the simulator works in the Scene Tree and things like this, and understand that there is a tree of components. Because then later they have to change the robots that are given to them.

(00:08:06):

I think it is just a matter of expectations. If you can put a humanoid robot together in 15 hours, then it is maybe 10,000 times faster than doing it for real. But you may be expecting to do it in ten minutes. So maybe that is the problem.

(00:08:27):

And then of course the physics are finicky. I have this one robot, I do not know if you have seen it, the one where there is a side spinning wheel that makes the robot turn once it goes off a cliff. Setting up the physics right, so that this actually works, so the wheel speeds and the friction coefficients, is quite difficult.

(00:08:53):

It requires a lot of tinkering in order to understand the intrinsic problems of the physics simulator that they use inside, I think Bullet. So that also might create problems if you do that for the first time.

EW (00:09:09):

So you are telling me that it is hard. Well, that is irritating.

CW (00:09:14):

<laugh> Do you need to understand the physics to be able to- I would imagine that these robots get so complex that you are not going to sit down on a sheet of paper and solve analytically the equations of motion of the system. Or is that something that people should be considering, with maybe simpler robots, before going into the model?

NC (00:09:39):

Well, I think you need the equations of motion once you want to solve the inverse kinematics problem or even the forward kinematics. You want to know where things are, like when the wheels turn. But generally you do not have to build your own robot in Webots anyways. You just use one of the many that it comes with. I never had to do that really.
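
As a concrete example of the forward kinematics he mentions, knowing where the robot is when the wheels turn, here is a small sketch for a differential drive robot. The wheel radius, axle length, and speeds are illustrative values, not tied to any particular robot.

```python
import math

def integrate_pose(x, y, theta, wl, wr, dt, r=0.02, axle=0.05):
    """Forward kinematics of a differential-drive robot.

    wl, wr: left/right wheel angular velocities (rad/s)
    r: wheel radius (m), axle: wheel separation (m), both illustrative values.
    """
    v = r * (wl + wr) / 2.0          # forward speed of the robot body
    omega = r * (wr - wl) / axle     # rotation rate of the robot body
    x += v * math.cos(theta) * dt    # integrate over one small timestep
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: drive for one second in 10 ms steps, right wheel slightly faster.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = integrate_pose(*pose, wl=5.0, wr=6.0, dt=0.01)
print(pose)  # the robot curves left because the right wheel is faster
```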

(00:10:04):

Maybe in the class it is a distraction in the beginning. But on the other hand, I want people to understand how the simulator works. So you do not have to be able to build your own. It is really just getting people to the point that they see- When they follow my curriculum in Coursera- I guess that is what you refer to, Elecia?

EW (00:10:25):

Yes.

NC (00:10:27):

Yeah. Then I really want them to understand how these components work, and that they are discrete elements like motors and sensors that you build this up with.

(00:10:39):

I think it is good feedback actually, that this is maybe an annoying hurdle. Maybe one should do easier things in the beginning. And once people like the class or like the content, then let them play with the simulator, or learn the simulator intrinsics.

EW (00:10:56):

Oh, it is not really a comment on the class. The class is paced well. It was just that I wanted more than the class covered, in order to be able to do what I wanted to do. But it is a Webots hurdle. It is not really a class hurdle. It is a Webots tutorial that I probably should figure out and then write.

NC (00:11:17):

Yeah. I mean any hurdle is a hurdle in the class. So I think it is good feedback. I appreciate it. I will use it.

EW (00:11:23):

So back to, you mentioned the model- You were talking about this wheel that is perpendicular to the normal- Okay. So there is a robot.

CW (00:11:34):

<laugh> Podcast visualization.

EW (00:11:35):

You have a really good description of this. The robot, behaviorally, it is just a little tiny windup toy robot. It goes to the edge and then it turns. It goes to the edge of the table it is on, and then it turns.

CW (00:11:50):

So it avoids falling off the edge.

EW (00:11:52):

It avoids falling off the edge.

CW (00:11:54):

Okay. I have not seen this.

EW (00:11:55):

But it is done entirely through sensing and actuation. There is no intelligence to it. Because when it is going forward and it starts to lean forward, it engages a different wheel that turns it to the side.

CW (00:12:08):

Okay. So it is just a little more sophisticated than those 1980s dog robots outside Spencer Gifts that run around. Okay.

EW (00:12:15):

It is exactly as sophisticated as the 1950s windup robots.

CW (00:12:19):

Okay. <laugh> All right.

EW (00:12:20):

That seems like the simplest version of a robot possible. According to your definition, it is a robot?

NC (00:12:29):

Yeah. That is why I show it to the students and ask them, "Do you think it is a robot?" I also asked them, "How would you build it? If you had to build it from scratch using your imagination, what kind of components would you envision? What kind of logic do you need? How could you go about it? Would you need a computer?"

(00:12:49):

I think then, that is already where the definition becomes really murky, because it is really just mechanistic. But then if you add electronics, then why is electronics any different than a mechanism? It is also just photons or electrons doing physical stuff. There is no algorithm there. But it is really a conversation starter also in the book.

(00:13:16):

But it also has another component to it, which I think you want to get to. It also shows how important the overall systems design is. Even if you have a computer in there, the geometry of the mechanism is super important for this to work. This will stick with you throughout the robotics curriculum, the book, the class, and your career as a roboticist.

EW (00:13:46):

Exactly. As I said, it is one of those areas that I find personally difficult. How do people learn that part?

NC (00:13:59):

I think seeing such a robot, primed for this, is very important. And then seeing examples throughout class. Or in commercial products like the Roomba or things like that. But it is very, very difficult to design and learn that as a design skill. I have not figured out how to do that, or how to teach that.

EW (00:14:25):

<laugh> Okay.

NC (00:14:26):

But I am also not very gifted at that part. I think if you study mechanical engineering then you would be exposed to many, many, many complex mechanisms. It is kind of a school of thought for that design process. But, yeah. Animals are of course another big source of inspiration in robotics historically, where people look at how outstanding they are.

EW (00:14:52):

What are the concepts that most people have trouble with, when learning robotics?

NC (00:14:57):

The computer science students often have not learned what a finite-state machine is, for some reason, because that is something that is taught in computer architecture. They write code that runs from the top to the bottom, but they do not have a loop that goes into different states and then has state transitions. That is one part.

(00:15:26):

But then the other part is that the code does not run- The code just triggers what the robot does, and then physics takes its course. So while physics evolves, you have to take a backseat and wait until physics has happened. So you, for instance, time your loop or something, and then the robot has moved a little bit and then you can sense again. Wrapping their head around that is usually new to people, even if they have programmed a lot before.
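
A rough sketch of both ideas, a finite-state machine and "waiting until physics has happened," might look like the loop below. The sensor and motor functions are placeholders, not any particular robot's API; in Webots they would be device calls and the sleep would be robot.step().

```python
import time
from enum import Enum, auto

class State(Enum):
    DRIVE = auto()
    TURN = auto()

# Placeholder I/O: on a real robot these would be driver calls; in Webots they
# would be device reads/writes, and the sleep below would be robot.step().
def read_cliff_sensor() -> float:
    return 0.0          # 0.0 = solid ground ahead, 1.0 = edge detected

def set_wheel_speeds(left: float, right: float) -> None:
    pass                # only commands the motors; the motion happens later

def control_loop(dt: float = 0.032) -> None:
    state = State.DRIVE
    turn_until = 0.0
    while True:
        now = time.monotonic()
        # 1. Sense: the world has changed since the last iteration.
        cliff = read_cliff_sensor()
        # 2. Decide: state transitions of the finite-state machine.
        if state is State.DRIVE and cliff > 0.5:
            state, turn_until = State.TURN, now + 1.5   # turn away for 1.5 s
        elif state is State.TURN and now >= turn_until:
            state = State.DRIVE
        # 3. Act: only command the motors; the code does not perform the motion.
        if state is State.DRIVE:
            set_wheel_speeds(3.0, 3.0)
        else:
            set_wheel_speeds(-2.0, 2.0)
        # 4. Wait until physics has happened, then sense again.
        time.sleep(dt)

if __name__ == "__main__":
    control_loop()
```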

CW (00:15:58):

There sure is a lot that goes into that too, because there are sampling rates, there is how fast your sensor updates, and how fast you can read your sensor, which might be separate things. There is a lot that is just part of your system, that is related to physics, but on top of physics.

EW (00:16:14):

I feel like being an embedded systems engineer, I am used to working with the world.

CW (00:16:18):

Right. But-

EW (00:16:18):

But yes, I could see how that might be odd. I have another question. What is the most fun part to teach?

NC (00:16:27):

You mean the most fun part for me, or the most fun part for the students?

EW (00:16:32):

Well now I have two questions.

CW (00:16:32):

<laugh>

NC (00:16:32):

Okay. I like them getting excited about the robot moving, and the opportunities they get once they realize that they can very easily create illustrations and visualizations of something that they have complete control over.

(00:16:53):

Like the TIAGo robot going to the kitchen and picking up items. Something they probably did not expect that this can be accomplished by the end of a one semester class. That happens throughout the class, when they engage with the robots and see what happens there.

(00:17:14):

I guess that is also the most fun for them to do. I was a little reluctant because most of the students actually expect to work with real robots, and then are disappointed if they cannot. I think it is also difficult to appreciate what the simulator does for you, if you have not had that embedded systems robotic experience.

CW (00:17:37):

<laugh>

EW (00:17:43):

<laugh> They really wanted things to catch on fire.

CW (00:17:45):

No, not only that. It is like, "Okay, we are six weeks in, and you have managed to install your compiler correctly."

EW (00:17:51):

"Oh, it does not work. That is just a jiggly wire." <laugh>

CW (00:17:53):

Yeah.

NC (00:17:55):

No, yeah. No, I understand why you guys are laughing. Because that is exactly right. That is the embedded systems background that you are bringing, and they have no idea.

(00:18:03):

I tell them, "Look how it is so easy. Normally you have to take a ruler and put it on the table, and follow the robot and reset it and carry it through the room. And here you have to do none of this. Not to mention all these plugs becoming loose somewhere inside, or things heating up weirdly and then the behavior changing throughout." So yeah, I always tell them how lucky they are, but it is difficult to communicate that.

EW (00:18:31):

Just make them do the IMU calibration procedure, where they have to do the hard and soft iron calibration, and then all of the accelerometer orientations. Do that with a 40 pound robot and it gets really heavy.
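
For context, once the data is collected, hard and soft iron calibration boils down to subtracting an offset and applying a matrix to the raw magnetometer reading; the painful part is rotating the (heavy) robot through all the orientations. A sketch with made-up calibration numbers:

```python
import numpy as np

# Made-up calibration results; in practice you estimate these by rotating the
# robot through many orientations and fitting an ellipsoid to the raw readings.
hard_iron_offset = np.array([12.0, -3.5, 8.1])      # constant bias (uT)
soft_iron_matrix = np.array([[0.98, 0.01, 0.00],    # scale/skew correction
                             [0.01, 1.03, 0.02],
                             [0.00, 0.02, 0.99]])

def correct_mag(raw_uT: np.ndarray) -> np.ndarray:
    """Apply hard iron (offset) then soft iron (matrix) correction."""
    return soft_iron_matrix @ (raw_uT - hard_iron_offset)

print(correct_mag(np.array([30.0, -10.0, 40.0])))
```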

CW (00:18:42):

I have to do that with a 40 pound drone.

EW (00:18:44):

<laugh>

CW (00:18:44):

It is horrible. <laugh>

EW (00:18:44):

Yes. It is really weird for me to think about robotics being separate from embedded systems. Maybe that is just my background. Is that a common separation in academia?

NC (00:19:01):

Yeah, I think so. Because historically robotics or robotic textbooks are about dynamical systems, like the kinematics and then the dynamics of these linked chains of actuators. I do not teach dynamics, but only the kinematics and all this new stuff, I would call it, which is the planning aspects. There are all these algorithms, like Dijkstra's from networking, which we now use for path planning, or a lot of probability theory that came in.

(00:19:36):

The question is how can you pack that all into one class, if you have to deal with the limitations of the hardware? So for example, when we used Sparki, which is an Arduino based robot, they had to implement Dijkstra's in C on a four by four grid. So they do not really see how the path is planned, or why path planning is difficult.
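
For readers who have not seen it, Dijkstra's algorithm on a small occupancy grid is quite compact; the class exercise was in C on the Sparki, but a Python sketch of the same idea looks like this (the grid, start, and goal here are arbitrary examples):

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a small occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk backwards from the goal to reconstruct the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(dijkstra_grid(grid, (0, 0), (3, 3)))
```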

(00:20:00):

Now they can have a real map that is collected by a laser scanner, and plan a path on that. You can push it much further if you disconnect it from the hardware.

(00:20:10):

Because otherwise the TIAGo robot- I do not know how many computers are there, and how many sensors, and power routing. It is quite a complex system, which might take a few weeks of just learning how to use it properly. And then I cannot teach them all this stuff, and have the robot go through the room and pick up something from the kitchen sink at the kitchen counter.

EW (00:20:36):

Is it realistic to let the students think that this easy part of robotics <laugh> can happen without a real robot?

NC (00:20:49):

Yeah, it is a good question.

CW (00:20:51):

I would not call it easy.

EW (00:20:51):

I know. That was wrong. <laugh>

NC (00:20:54):

I think it is really- So at CU another big deal for them is to program it in Python. Before that they learned only C++ in our curriculum. So in the third year then some of them know Python, but it is the first time they really have to go through this.

(00:21:12):

So there are so many things that they do not know, and that they have to learn, and that this class is using to transform them into computer scientists, that we basically pick our battles.

(00:21:24):

I would say a lot of these things that you experienced the hard way, and that they should experience if they want to go on to robotics, are not what all of the computer science graduates need. I do not know how many percent, but I think a large percentage of our graduates take the robotics class because it is part of the core curriculum.

(00:21:44):

I do not think these are things they have to know. It is really about finding a way to get those people, these computer science people, into the robotics field. Then keeping them there, and let them take the next class, which is advanced robotics.

(00:22:02):

That uses the Amazon Racer, which is one of those remote control cars which you can then make autonomous. There they start to get exposed to much more embedded system stuff. They have to log into that Linux box, they have to use ROS, Ubuntu command line. Things like power and cables coming loose become an issue. I do not think there really are sensors on their own.

(00:22:32):

But then they can also all work in the robotic labs, and become real roboticists. So it is really just one direction to come in, which is maybe very orthogonal to coming in from a VEX Robotics perspective, or from an Arduino based perspective.

EW (00:22:54):

But these algorithms are important, and understanding matrix math, which is extremely useful in lots of things but specifically in robotics, seems like a good course path.

NC (00:23:12):

Yeah. I already regretted it when I said, "Discrete mathematics." The other class is linear algebra.

CW (00:23:16):

Yeah.

EW (00:23:16):

Yeah!

NC (00:23:18):

Maybe that is more important. I am not sure which one it is. It depends on what you want to do. I think you can probably get away without the matrix math, if you use high level AI planning. But then for everything else you will need it, especially computer vision and the like, or even machine learning. So maybe we should correct that answer. And so herewith we do.

EW (00:24:00):

<music> You remember that episode on the "Big Bang Theory," when they suggested adding Bluetooth to a flower barrette to attract a male audience? Penny asked Sheldon, "Wait a minute. You want to make a hair barrette with Bluetooth?" To which Sheldon replied, "Penny, everything is better with Bluetooth." The right answer was actually, "Penny, everything is better with Nordic Semiconductor's Bluetooth."

(00:24:22):

Because with a 40% share in the Bluetooth Low Energy market, chances are you have at least a handful of Nordic devices in your home at the minimum. But it is not just about Bluetooth, because over the years Nordic has become a market leader in IoT connectivity.

(00:24:35):

Providing hardware, software, tools and services to create the IoT devices of the future. Across a wide technology portfolio that contains not only Bluetooth Low Energy, but also LE audio, Bluetooth Mesh, Thread, Matter, Cellular IoT, Wi-Fi and more.

(00:24:53):

And to thank you for listening, Nordic Semiconductor has a contest to give away some parts. Fill out the entrance form and you are in the running. That link is in the newsletter, and in the show notes. <music>

(00:25:13):

You have the book, you have a Coursera class. The videos all also seem to be on YouTube. Are they the same?

NC (00:25:23):

The same videos. Yeah, I uploaded everything to YouTube, for broadening participation.

EW (00:25:28):

Which one should I take? Why would I take the Coursera over YouTube? I guess they are both free?

NC (00:25:36):

Yeah, so Coursera is marketed as free. But then I think what happens is, if you want the credit, they charge you and you have to be a premium subscriber. But I think also Coursera changes things all the time, so I am not really sure what the status quo is.

(00:25:57):

I think it is a free class, if you do not want to download the certificate. And I think there are certain quizzes that are not available in the free version, which do not add much in terms of content. It is just an additional assessment.

(00:26:17):

I think the Coursera class is better, because when I made it, I made it without lectures. So it had all these mini videos that show examples in Webots, but then it has a lot of quizzes, which I call "guided exercises."

(00:26:40):

The idea is that you are told or asked to do something in Webots, and then I would ask a very simple question. That is easy to answer if you have the robot in front of you, and you can just press play and see what it does. But very difficult to answer if you do not. So the idea is that you really implement everything as you go along in Coursera.

(00:27:04):

On YouTube, these lectures just provide some content, but do not provide that self-guided activity.

EW (00:27:16):

Then I have to sign up for Coursera! Okay. And then the book, which is for sale on all the normal book sites, I can also compile in LaTeX.

NC (00:27:29):

Yes, that is right. It is on GitHub. The license is that we can make the source code available, and so people have to make their own PDF.

EW (00:27:39):

Yeah. That was harder than it seemed, but that is okay. So basically there are free versions of the whole class. How are you going to make any money from that?

NC (00:27:53):

Oh, I have a job.

EW (00:27:54):

<laugh>

NC (00:27:54):

That is not the primary driver. Actually, there are royalties for the book if people buy it. And there are royalties for the Coursera class, if people from outside of CU take it. I think there is some key on how this is split across the university and the people who created the content.

(00:28:20):

None of these money sources are really scalable. I feel like it is more part of the profession to see where things are going, and actively contributing to how the university is changing. That is why for instance, CU has engaged in this master's program in Coursera.

(00:28:49):

I do not know if you know that, but you can- If you pass three of the classes in the master's program, you get enrolled in the master's program, even if you do not have a bachelor's. So you can now just have a high school degree and enroll in a computer science master's program. I think it is actually indistinguishable from the on-campus experience.

(00:29:16):

Of course a lot of people ask questions and wonder, including our own faculty, whether that is the right thing to do, because there is also some kind of gatekeeping idea. I think what happens is now education is freely available and ubiquitous. We have to find new ways to make the university a platform, rather than a broker of knowledge, an exclusive broker.

(00:29:44):

Funny things happened. I think somebody took this master's degree, and then started working at CU in a research lab for money during the summer. So it creates completely new avenues for people to do their career, and engage with CU. People who would never have come to CU as a student, are now getting a degree there. And interacting with people in various ways, and then moving on with their careers.

(00:30:12):

So I think that is part of the job, to explore. And we have awards. I got an award for the book, and I got an award for the open source activity. The university libraries are pushing that, or the engineering school, and rewarding professors for being proactive in developing materials. Instead of just using other people's work to educate.

EW (00:30:41):

I am unfamiliar with the University of Colorado's master's program. I am familiar with the Georgia Tech computer science master's program that is all online, and reasonably priced compared to doing it in person. So what you are saying is that you have an entirely online master's degree in computer science?

NC (00:31:06):

Yeah, I think it is actually the same as the Georgia Tech one. Except we have this entry point that is different, which is a mastery based access. But we also came after Georgia Tech. I think the Georgia Tech one is very well regarded in industry. So we are trying to compete in this space, I guess.

EW (00:31:30):

Can you imagine going back? I think we are about the same age, given when you graduated and all of that. Can you imagine if this was possible, and you could live anywhere and get a master's degree and start your career that way? It just seems so odd to me.

NC (00:31:54):

I think it is very odd. I think five years ago you could not have done that. Technically it was not possible. We always had remote universities, by the way, in Germany where you could get a remote education, and offerings like this always existed.

(00:32:12):

But the scale at which you can learn so many things by yourself, by the way without getting the degree, I think that is really the novelty. That you can set your goals to anything, and learn it from the internet using YouTube videos and ChatGPT in particular, I think that is really the novelty. That is where we have to then say, "By the way, we also sell degrees. Do you really need one?" And think about what that means.

EW (00:32:49):

<laugh> Do you really need one?

CW (00:32:50):

I do think that applies to certain kinds of people though. Because you are asking, Elecia, "Can you imagine doing that, when we were in school?" No! Because I would never have finished. I would have goofed off and looked at videos.

(00:33:04):

I am not temperamentally set up for that. I need to have people in my face in person, or some sort of accountability. I do not think I could do a thing like that, without just failing out of it. <laugh>

NC (00:33:24):

Yeah. I have the greatest respect for the people who actually do the master's degree. I often write them letters. I think they are just so much more capable than- I should not put it this way, but-

CW (00:33:35):

Yeah! <laugh>

NC (00:33:35):

They are extremely heavily self-motivated. That goes- It is not what the on-campus students often are, because they behave as you described.

(00:33:46):

There is a funny other aspect. If you really want a job and you want the money, then you will probably do it. I had a student come to me recently. He asked me if he should stay on for the master's degree. I then asked him what his goals are and what his grades are. Because maybe he has very poor grades, so then he gets the master's degree and then he finally has good grades that he can show.

(00:34:12):

No, he had 4.0. Then I asked him his goals. He did not really know. He wanted to learn more to be proficient in a specific industry. I then told him that his goal should be to get a job, because that is why he takes the degree.

(00:34:29):

And now if he cannot get a job because of the economy, or limitations of his GPA or knowledge, or he wants to get a different job, then yes, go get the master's degree. But I feel like sometimes the students forget that they want the job.

(00:34:49):

With this new world of online education, I can look what are the jobs that I want, and I can try to get that knowledge and then I can demonstrate it.

(00:34:57):

I had a student once- Actually, he was a student and he worked in my company. He wanted a job at Rivian. They did not give it to him, because he did not have certain qualification, I think with OpenGL. He spent the weekend learning OpenGL, and programmed this Rivian car driving in some kind of muddy something. He called that recruiter and said, "Look, I know OpenGL now. Can I have the job?" And he did get the job.

(00:35:35):

That was when? Five years ago, maybe three, something like that, when Rivian was hot, new. I did not realize how easy it was. Not that it was easy for him, but it was possible to learn all of this and find YouTube examples. At the time, I had not learned from YouTube yet. I was a textbook kind of guy.

(00:35:58):

If I would have done this, I would have ordered an OpenGL book from- What is the thing with the animals on? O'Reilly.

EW (00:36:08):

<laugh> O'Reilly.

NC (00:36:09):

And I read the entire thing, and then I- But this younger generation, he is much more result oriented. He wanted to do this, so he found some example online and he cobbled stuff together. He got much quicker results than I would have gotten. That is why I said, "He wanted the job." He wanted the money that comes with it, which is much more than his- Oh, he had a mechanical engineering degree too. They are paid less than the CS people.

(00:36:47):

I think, Chris, you would be the same if you were motivated to get the job. Now I can ask you why you were not. We could go into this. But it is also a developmental stage, where the university serves as an environment for personal growth. That is a very important aspect too.

CW (00:37:15):

I think that was very important for me, because I went from a place where in high school everything was somewhat easy and I did very well. And then as soon as I got to college, I suddenly realized that I was mid tier to lower quarter tier of this class, and that did not feel right and things were very difficult. It took me a couple of years to figure out how to apply myself, and how to be motivated and things like that. It is an age thing.

(00:37:43):

Now I can learn from YouTube videos, and I can learn from online classes. And I did a master's degree when I was an adult, and that was self-motivating and stuff like that. But at 18 to 20, that would have been very difficult. But I also did not grow up with YouTube, so maybe I would be a different person.

EW (00:38:02):

But I still have trouble watching things on YouTube. I do so much better with the textbook. How did you switch that?

NC (00:38:10):

Yeah, I think it was actually also that student who introduced me to that idea. When I bought books, he was watching videos. I think you need really good stuff. I like the "makemore" videos from Andrej Karpathy, for instance. That opened up the world of transformer neural networks for me. So I think you have to get to the right content.

(00:38:41):

But normally to me it is also too slow. It is a pacing thing. The book, you can pace yourself. The YouTube video- You can also set it to 2x. Sometimes you have to do that and power through. Again, I think the content has become so much better on YouTube and in video formats, than you could find in books. I think that is really what made the switch for me.

CW (00:39:15):

There are a lot more examples, and seeing things happen.

EW (00:39:18):

And with the robotics, it is actually really useful. Because I read about the robot that Nikolaus is talking about, with the two wheels and the one perpendicular and it does not fall off the table. In the book it talks about it and you are like, "Okay." And then when you see it in the video, it is like, "Ohh! That is what he meant." There are times where motion matters.

NC (00:39:43):

Yeah, the podcast audience is probably already quite irate, that they could not see the robot that we all have seen and scratched.

CW (00:39:49):

<laugh>

EW (00:39:50):

Oh, some of them have already clicked the buttons to look it up. Others have just cursed us on their drive and said, "I will never find it again!"

CW (00:40:00):

<laugh> That is fine. We have had much worse visual descriptions on this podcast.

EW (00:40:04):

<laugh>

CW (00:40:04):

Origami folding takes the cake. <laugh>

EW (00:40:12):

What kind of research do you do?

NC (00:40:16):

Right now we are very interested in these ChatGPT-like open world models, and how they affect robotics and how we can use them to make robots smarter. To make that more clear, you can now upload pictures to ChatGPT. You can take a photo of your kitchen table and ask ChatGPT, "What are the things I need to do to clean the table? Make a list." Or tell it I am a robot, whatever. It would then enumerate the things.

(00:40:48):

You get quite a high level of common sense, and almost general intelligence, to interact with the world. The research is not just using ChatGPT in that way, but also training models that are like it, or use the same underlying techniques, to do that.

(00:41:14):

Humanoid robotics is something that is also coming up. I am just about to order my first humanoid robot. Then I hope we can do things that were not conceivable ten years ago, or five years ago even.

EW (00:41:34):

For ChatGPT and robotics, I have in my metaphorical vision a Roomba-looking robot that has a pen and is the Logo turtle robot from many years ago. And it talks to ChatGPT and says, "I want to draw an owl, and I know how to go left, right." And it lists out the turtle Logo instructions. Is that what you are talking about? Simplifying what the robot wants to do, versus what it knows how to do.

NC (00:42:11):

Yeah. That is definitely a great example, because it actually pulls in this extremely abstract knowledge about what an owl is. What we can do now if you want, we could go to ChatGPT and ask for the owl ASCII art. To make an ASCII art, like using characters, and to see if that would even work. But I guess it would. I guess it could be done relatively easily.

(00:42:37):

Now you have to just provide, as you said, the API. That is like the prompt engineering you have to do, and then copy and paste the output, and it will work. So-
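
One hedged sketch of what that "provide the API" step could look like: constrain the model to a tiny command vocabulary and interpret it with Python's turtle module. The command list here is a stand-in for whatever ChatGPT would emit for the owl; it just draws a triangle.

```python
import turtle

# Commands a language model could be prompted to emit, one per line, e.g.
# "FORWARD 50" / "LEFT 90" / "PENUP" / "PENDOWN". The owl-shaped output would
# come from the model; this string is only a stand-in.
model_output = """
PENDOWN
FORWARD 80
LEFT 120
FORWARD 80
LEFT 120
FORWARD 80
"""

def run_commands(text: str) -> None:
    t = turtle.Turtle()
    for line in text.strip().splitlines():
        parts = line.split()
        cmd = parts[0].upper()
        arg = float(parts[1]) if len(parts) > 1 else 0.0
        if cmd == "FORWARD":
            t.forward(arg)
        elif cmd == "LEFT":
            t.left(arg)
        elif cmd == "RIGHT":
            t.right(arg)
        elif cmd == "PENUP":
            t.penup()
        elif cmd == "PENDOWN":
            t.pendown()

run_commands(model_output)
turtle.done()
```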

EW (00:42:54):

Will it?

NC (00:42:55):

That is quite incredible. Yeah.

EW (00:42:57):

ChatGPT is... Uh. Hmm. Does not always work.

NC (00:43:03):

The limitation might be that the list of instructions is quite long for this task, and it does not like to spit out too much information. But I think you can get to some kind of version of this.

(00:43:19):

I just saw a paper from Berkeley, Ken Goldberg's group, who showed the robot 3D puzzle pieces like cubes and triangles and then asked, "Make me a giraffe." And then the robot would pick those items and build the giraffe, or whatever, like animals of that kind.

(00:43:44):

There is some more engineering to that. But in general it uses the vision language abilities of these models, and demonstrates that things like that can be done. So that is very similar to your drawing example, it is just in 3D and involves blocks.

CW (00:44:04):

Let me go back to- You dropped that you have ordered a humanoid robot.

EW (00:44:10):

<laugh> You dropped that you have ordered a human.

CW (00:44:11):

Right. Sorry.

EW (00:44:11):

Not so good.

CW (00:44:15):

Why- This is a question we have asked other robotics adjacent people on this podcast, in various forms. Why, or is, the humanoid form important?

NC (00:44:27):

Yeah, great question. I spend a lot of time in manufacturing, trying to sell robots to them. They often discard them. As you might know, these robot startups are often not successful.

(00:44:40):

Be it arms or be it mobility, they get sent back. When you then ask, "Why do you not want it anymore?" They said, "Does not work." Then you say, "No, it does work. Moves as it should." And they say, "No."

(00:44:56):

I think what happens often is that the robots disrupt the flow. They cannot work faster or slower. They basically enforce a certain takt time, which is the musical word used in manufacturing for the pace at which things move. I think humans are able to adjust better and adapt better. So it is very difficult to integrate. You have some existing setup with humans that do stuff, and machines, and then when you bring in new machines, they disrupt the flow of things.

(00:45:38):

So I think humanoids are the best bet to create something that very seamlessly integrates. I have to give you a longer answer. I can give you a second part to this answer, another example. I often use this Bialetti espresso maker from Italy. It is that silver can that you have to screw on and off and put the powder in, and then you set it on a heat plate. Do you know that?

CW (00:46:10):

Oh, yeah.

EW (00:46:10):

Mm-hmm.

CW (00:46:10):

Yeah. Like a camping espresso thing, that kind of-

NC (00:46:14):

Yeah, the Italians actually live with it. But that is fine.

EW (00:46:19):

<laugh> "This is how it should be done. It is not a camping thing."

CW (00:46:21):

Sorry. Sorry.

NC (00:46:25):

Let us say you have a factory and it has these kind of things, the heat plate and the grinder and this. So you go to a roboticist and say, "Hey, please automate that for me." And then for $150,000 you could probably think of something which involves a couple of robot arms and careful jigs that make that work. I think it would be very brittle.

(00:46:52):

Now the alternative is an espresso maker, where you press a button. That espresso maker actually has the same configuration inside. Everything is the same. It heats water and then it has this sieve and the water gets shot through that. So you basically mimic this entire mechanism. But you can get that done for $300, or $500.

(00:47:21):

That is one problem. It is called greenfield versus brownfield, where you have a brownfield installation that you want to retrofit, versus you can discard that stuff and build something new on the greenfield.

(00:47:35):

We have not really talked about the fact that the robot installation would not really work. You can see how you would have universal robot arms, and there would be problems. And problems all the time, that something would get stuck or dirty or fall out.

(00:47:49):

So I think really the only way to deal with this brownfield problem, is a humanoid which is as good as a human. Because that is really the only thing that comes with all of the tools or, let us say, I do not know what it is, form factor and functionality to actually replace the human who has previously operated that existing brownfield installation. And so that is the answer.

CW (00:48:18):

Because we have adapted all of our environments to ourselves, necessarily. So it is much easier to stick a human-shape thing in a human-shape hole. <laugh>

NC (00:48:29):

Yeah. Maybe that would be the simple answer, but I do not think people buy that answer anymore. I think they need better concrete examples like that. Of course it is a made up example too. You are right. That is the summary. People do not want to change their environments and their set up. Very difficult to make even small changes in a factory.

CW (00:48:55):

See, I thought you were going to say, "People would not want to send back something that looks like a human." <laugh>

EW (00:48:59):

Hmm.

NC (00:49:02):

It is a nice one.

EW (00:49:03):

It is funny how much I disagree with this.

NC (00:49:06):

<laugh>

EW (00:49:09):

I did not really expect to have such a strong opinion. But I am just like, "No!"

CW (00:49:16):

It is Sunday. You cannot have a strong opinion on a Sunday.

NC (00:49:19):

I would use that one, Chris. I like it. It is much easier. I just say it without a smile, just very serious.

EW (00:49:27):

Bartender! If I was building a bartender robot, I would not want a humanoid. I would want something with wheels, instead of legs. And I would want four arms, or eight arms. I would not want a humanoid, even though it is a humanoid that does that job. I want something designed for purpose, and not designed for the flexibility that is our feeble humanness.

CW (00:49:55):

But the bar already exists, as it is. That is the problem. The bar-

EW (00:50:01):

Right. And my robot does not break that in any way, with wheels.

CW (00:50:04):

You have not designed it yet. <laugh>

EW (00:50:06):

With wheels and four arms.

NC (00:50:07):

There are many bartending robot startups. Like, cocktail mixing machines.

CW (00:50:13):

Yeah. Yeah, yeah. But that is a greenfield, right? They have got all the things in there, like an espresso machine.

EW (00:50:17):

Yeah. But my robot could do the whole flair cocktail thing.

CW (00:50:20):

Well then let us do that startup.

EW (00:50:21):

<laugh>

NC (00:50:24):

So I think there is a problem, that you have only a marginal value prop.

EW (00:50:28):

Yeah.

CW (00:50:28):

Yeah. <laugh> Right.

NC (00:50:29):

And then it does not justify all of this mechatronic investment. The humanoid can really go and do multiple things. The problem with this is, also my example only works if the humanoid is so good that it could do all of this dexterity-

CW (00:50:47):

Right. Right. We are just kind of sliding over that. Yeah. <laugh>

NC (00:50:50):

Once you do not have that anymore, then you would exponentially lose value. That is what we are seeing right now, when BMW tries the Figure humanoid robot. The problem is, if they cannot deliver or raise the bar of value, then this will fall through. Then we have a failed bubble, which would be a pity.

EW (00:51:17):

One of the other problems with humanoid robots, or my bar robot, is that robots are inherently not safe. They are usually stronger than humans, and they do not really have any compunction against hitting us, if that is what their programming tells them to do. And so in manufacturing we put them in cages, or paint lines around them, and tell the humans not to go in there.

CW (00:51:44):

Put blinking lights and orange paint on them. Yeah.

EW (00:51:48):

Yeah. We make them brightly colored, so that people do not hurt themselves, or get hurt by the robot. How would your humanoid robot prevent that? Or are you going to make it squishy?

NC (00:51:59):

Yeah, that is a great question. All of these things, squishy and collaborative robots, those exist. The key in collaborative robots is that the robot measures its own joint torques, and as soon as it exceeds what it thinks it should exert, it would stop.

(00:52:16):

Before, you had robots that could lift a hundred kilograms, and so if you get in the way, they would just move right through the obstacle or you. These new collaborative robots, they could sense that and then stop. Of course once you move them faster, then they whip you again, so it does not work.

(00:52:38):

I think one really has to make the robots behave safely, and in the worst case just stop moving if people approach. That is of course a problem if you think about care situations. But I think there are lots of applications where the robot actually does not have to do anything with humans.

(00:53:02):

So if the robot is doing something in a manufacturing environment, or cleaning the kitchen, then it is also safe for it to get into a less aggressive motion mode, that is, moving slower, as people approach. Then the assumption is that when contact is made, the robot can tell. It can tell already from monitoring its own joint torques.
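
The collaborative robot idea he describes reduces to a supervisor loop roughly like the one below: compare measured joint torques to what the motion should require, slow down when people are near, and stop on unexpected contact. The sensor functions and thresholds are placeholders, not any particular robot's API.

```python
# Sketch of the collaborative-robot safety idea: compare measured joint torques
# against what the motion should require, and slow or stop on surprises.
# read_joint_torques(), person_nearby(), and the limits are all placeholders.
EXPECTED_TORQUE_NM = [2.0, 5.0, 3.0]   # per joint, for the current motion
CONTACT_MARGIN_NM = 1.5                 # extra torque that signals contact

def read_joint_torques():
    return [2.1, 5.2, 2.9]              # stand-in for real torque sensors

def person_nearby() -> bool:
    return False                        # stand-in for a proximity/vision check

def safety_supervisor(command_speed: float) -> float:
    """Return the speed the robot is actually allowed to move at."""
    measured = read_joint_torques()
    for tau, expected in zip(measured, EXPECTED_TORQUE_NM):
        if abs(tau) > expected + CONTACT_MARGIN_NM:
            return 0.0                  # unexpected contact: stop immediately
    if person_nearby():
        return min(command_speed, 0.2)  # someone close: drop to a slow mode
    return command_speed

print(safety_supervisor(1.0))
```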

(00:53:25):

And then there is of course this whole idea of adding skins to the robot.

EW (00:53:30):

And clothes.

NC (00:53:33):

There is some interesting- What is up?

EW (00:53:36):

And clothing. Sorry, go ahead. Skins.

NC (00:53:39):

Yeah, I am actually really bullish on the idea of making robot clothing. Like with the humanoids, like jerseys and things that are functionalized and provide them with- Because the textile industry is very well understood, very well set up. There are lots of ideas for wearables out there to functionalize clothing. You start to add another layer of safety and also aesthetic appeal.

EW (00:54:09):

I was kidding. Robots should not have clothes.

CW (00:54:11):

Yeah, sure.

EW (00:54:13):

<laugh>

NC (00:54:14):

Robots are people too.

CW (00:54:15):

<laugh>

NC (00:54:15):

You cannot just fire them.

EW (00:54:21):

You also mentioned before recording, that you are doing battery disassembly with robots. Could you describe that?

NC (00:54:28):

Yeah. I just got a new grant starting on January 1st, for battery disassembly of lithium ion batteries. A lot of first generation electric vehicles are coming to end of life soon, and there will be many more.

(00:54:48):

At the same time, lithium and other rare earths are a limited resource. So there is great interest in dismantling these batteries and recycling them. Or remanufacturing them by exchanging individual cells and modules. But then at the end also to extract these raw materials again.

(00:55:12):

I propose to do that with humanoids. The hope is also to use this as a case study for productive use of humanoids, because it meets this dull, dirty and dangerous paradigm that we like in robotics. It is a better idea than making a robot that operates a coffee maker.

(00:55:32):

It is something where people can really see, "Oh, I do not want a human to deal with a 400 volt battery or 800 volt battery. I would rather not be close to that battery when it catches fire, because I drilled through the-" If you drill into it or something, it can just spontaneously combust, and you cannot even put it out. So it is very dangerous, and I think there is a big driver for having humanoids do that.

(00:56:02):

Now you could also have- Many people have done this with static manipulators. But the problem is you have to be able to do this for possibly hundreds of different battery types.

(00:56:15):

And maybe also do this at places where the batteries are being delivered, and not move them across the country first. Because that not only costs a lot of money, it is also very dangerous again, because of this spontaneous combustion risk or short circuits that come when you put it in the back of a truck. So I hope that this will also drive a little bit the humanoid use case.

EW (00:56:46):

So instead of having people send all of their batteries to someplace in, well let us be realistic, Nevada, you would send these robots to various centers probably on the outskirts of town. They would disassemble the lithium ion batteries there, because the robots are humanoid and can move and can be flexible enough to disassemble many different kinds of lithium ion batteries. Am I understanding properly?

NC (00:57:18):

Yeah, that was correct. There is lots of stuff to unpack of course. One is that handling very many different kinds of things is again like a ChatGPT idea, like the open world abilities of these generative models.

(00:57:32):

Multimodal generative models will help you to identify screws and bolts and things in a more generic way. So even if you have not seen that type of battery before, you might be able to disassemble some of it, or ask a human worker how to do it.

(00:57:47):

But also the humanoid would provide the mobility that allows the robot to move around the battery, because sometimes they are bigger. Otherwise you would need quite a big installation that is static. It is like the big coffee machine versus the espresso maker: something that sits somewhere and costs hundreds of thousands of dollars, versus a more compact version that can use the standard tools that people are currently using.

(00:58:19):

Of course this disassembly is not going to be completely autonomous in the beginning, but it must somehow work together with the human worker. So let us say the humanoid could only do the things up until the battery is safe. Or, for example, opening up the case and probing what is inside, and then the human can take over.

(00:58:43):

Because it is a fluid process where you automate step by step, I think that is why the humanoid form factor again is important, so it can integrate with the existing places. Because people have already been doing this for ten years now, dismantling these batteries and sorting them. And sending them to these big plants in Nevada and other states, but sending other pieces elsewhere. So it is already happening, and it is about helping that market do it better.

EW (00:59:16):

You call them "humanoid robots," instead of "androids."

CW (00:59:18):

Hmm.

EW (00:59:18):

Why?

NC (00:59:21):

"Android" is an operating system from Google now, is it not?

CW (00:59:23):

<laugh>

EW (00:59:24):

Okay, that is fair. Totally fair. Yeah. Now that you say that, it does sound familiar. Yep. Yep.

CW (00:59:29):

I do not think they should be allowed to have that one.

EW (00:59:31):

<laugh>

NC (00:59:33):

Yeah, I think maybe it is also Latin or Greek? No? Does "andros" mean "man," rather than "human"?

CW (00:59:40):

Ah. That is true.

NC (00:59:42):

I usually use- I do not use the word "android" for- Just culturally it has not- Are you using "android" over "humanoid"?

EW (00:59:54):

No, I use an iPhone.

CW (00:59:55):

Nope.

EW (00:59:55):

<laugh>

NC (00:59:55):

<laugh>

CW (00:59:58):

You know, you are very funny.

EW (01:00:01):

<laugh> No. I do not. I-

CW (01:00:03):

I tend to use "robot."

EW (01:00:04):

I tend to use "robot."

CW (01:00:05):

Which is not specific enough. But, yeah.

EW (01:00:08):

I read a lot of science fiction, and it seems like that terminology has been changing. It did not occur to me that it was because of Google, and their misuse of the word for their operating system.

CW (01:00:23):

Google and gender specific. I think that was what you are saying, right?

EW (01:00:29):

I understand that. But I guess it does not- That one does not bother me.

CW (01:00:34):

You could always use "droid," but then I would be sued by somebody else.

EW (01:00:37):

Yeah, exactly.

CW (01:00:37):

Disney. <laugh>

EW (01:00:40):

One more question I have for you, if you have just a couple more minutes?

NC (01:00:44):

Sure.

EW (01:00:46):

You do art. What kind of art do you do? And how do you find time for it?

NC (01:00:54):

I have not done it in a long time. That answers the second part of your question. But there are always- I think most of the robotic projects are art projects, because you are not use-oriented. You are not solving anybody's problem, but you are pushing the envelope.

(01:01:15):

If you are too wild about that, then people will get back to you and stop funding that. And say, "No. Why do you want to do this? What is the point?" You say, "No, it is just cool. I want to try that. I want to make robotic sand or I want to make robot swarms." If that is your problem, then maybe "art" is the better label.

(01:01:45):

I once met a colleague, Michael Theodore, and he had very similar interests to mine, in terms of swarming. And how it can be that you have atoms that have physical interactions, like it is very simple to describe. But then they create molecules and then cells. And then you have smart things or a bee swarm or galaxies. You have these scale free phenomena, where these basic physical interactions turn into these complex systems.

(01:02:22):

He was equally curious about this as I was, and he wanted to explore it with art. So we worked together and we built this swarm wall, which was a wall of swarming slinkies that were just driven by several motors, moving across a surface that he designed.

(01:02:42):

The behavior of that system was later refined by a visiting professor, Ken Sugawara, who has been studying swarming and writing about the equations of swarming. What happens is you attribute more to these things than actually happens. So you think that thing has emotions or is somewhat intelligent, because of the complexity that you get from these interactions.

(01:03:12):

So that was a fine example of why I would do art and how I would find the time. Because it is just my main pursuit. We also got money for that even, to build the whole system.

(01:03:27):

Then at the same time, there are sometimes just artifacts of the robots that I find interesting, beyond the publication value that they have. I then like to connect this to other art, where people explore similar phenomena. So I really like these synchronization and desynchronization things, which also happen in the swarming systems. They happen in the swarm wall.
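
For the curious, a classic toy model of exactly this synchronization and desynchronization is the Kuramoto model of coupled oscillators. It is not what the swarm wall used, just an illustration of how simple pairwise coupling tips a group from incoherence into lockstep depending on the coupling strength.

```python
import numpy as np

# Kuramoto model: each oscillator has its own natural frequency and is nudged
# toward its neighbors' phases with coupling strength K. This is a generic
# illustration, not the swarm wall's actual dynamics.
rng = np.random.default_rng(0)
n, dt, steps = 30, 0.01, 5000
natural = rng.normal(0.0, 1.0, n)       # each oscillator's own frequency

def simulate(K):
    phase = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        coupling = np.sin(phase[None, :] - phase[:, None]).mean(axis=1)
        phase += dt * (natural + K * coupling)
    # Order parameter: 1.0 means fully synchronized, near 0 means incoherent.
    return abs(np.exp(1j * phase).mean())

print("weak coupling  :", round(simulate(K=0.2), 2))
print("strong coupling:", round(simulate(K=4.0), 2))
```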

(01:03:56):

Then I like to sometimes play with these and see what happens, and then also connect it to music that is made in that way, and things like this. It is not very exciting for other people, but it is very exciting to me. I think that is probably the definition of art, by the way.

EW (01:04:17):

<laugh> It is a different definition than I am used to, but I like it a lot.

NC (01:04:19):

<laugh>

EW (01:04:19):

Nikolaus, it has been wonderful to talk to you. Do you have any thoughts you would like to leave us with?

NC (01:04:28):

No! Um. Um. Keep on building, I think. Keep on blogging, and try to educate yourself and others. I think that is what makes the world go round.

EW (01:04:41):

Our guest has been Nikolaus Correll, Professor of Computer Science at the University of Colorado, and author of "Introduction to Autonomous Robots." There will be links in the show notes to his Coursera course and some of the robots we have talked about.

CW (01:04:57):

Thanks Nikolaus.

NC (01:04:57):

Thank you Chris. Thank you Elecia. Bye-bye.

EW (01:05:02):

Thank you to Christopher for producing and co-hosting. Thank you to Nordic for sponsoring the show. And of course, thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.

(01:05:13):

And now a thought to leave you with. "Should robots be humanoid?"