519: The Password Is All Zeros

Transcript from 519: The Password Is All Zeros with Mark Omo, James Rowley, Christopher White, and Elecia White.

EW (00:00:07):

Welcome to Embedded. I am Elecia White, alongside Christopher White. This week we are going to talk to safe crackers, Mark Omo and James Rowley.

CW (00:00:19):

Hello Mark. Hello James. I am excited to hear about my new life of crime.

MO (00:00:22):

Hey.

JR (00:00:22):

Hello.

MO (00:00:22):

Glad to be here.

EW (00:00:27):

Mark, could you tell us about yourself, as if we met at Supercon lunch?

MO (00:00:34):

Oh, as if I was the kind of person to introduce myself. Yeah, so my name is Mark. I am the director of engineering at Marcus Engineering. I work on military, medical and aerospace devices. I also do lots of cool embedded security stuff for those as well.

EW (00:00:52):

And James, we have never met. Could you introduce yourself?

JR (00:00:57):

It is true. Well, I am James Rowley. I always kind of cursed myself to being a jack of all trades. But these days I mostly do embedded software engineering and embedded software reverse engineering.

CW (00:01:11):

Do you sometimes just go back and forth, like you engineer something and then you "unengineer" it?

JR (00:01:16):

I have never had to yet reverse engineer something that I have done. Although I guess that is a good safeguard against losing the source code or something.

CW (00:01:25):

I do hit the backspace key pretty often when I am coding. I am not sure if that counts.

MO (00:01:29):

<laugh>

JR (00:01:29):

<laugh>

CW (00:01:29):

All right.

JR (00:01:31):

That is backwards engineering.

EW (00:01:34):

Mark, do you want to do a statement about only talking for yourselves, instead of other people?

CW (00:01:42):

Or organizations?

MO (00:01:44):

Yeah. All the work we did on this stuff was personal, from James and me. It was not affiliated with Marcus Engineering, the company that we work for.

EW (00:01:55):

And now lightning round. Are you ready?

MO (00:01:58):

I was born ready.

JR (00:01:58):

Ready!

CW (00:01:58):

What is the combination for your safe?

MO (00:02:02):

It is actually-

JR (00:02:04):

I am not telling you.

MO (00:02:05):

Oh.

CW (00:02:05):

Oh.

MO (00:02:06):

It is actually four, one, two, three, four right now, but.

EW (00:02:10):

<laugh> I wonder if they have the same safe.

CW (00:02:13):

Oh. All right, well a follow up-

EW (00:02:14):

And what James is keeping in there.

CW (00:02:15):

The follow-up was, where do you keep your safe? But I guess it does not matter. Yeah. All right.

EW (00:02:22):

What is your favorite human "Murderbot" character?

MO (00:02:27):

Oh, my favorite human "Murderbot" character. Oh, that is a good one. It would definitely be- Now I am blanking on the name. Not Gareth. Garth.

CW (00:02:39):

Oh, Gurathin.

EW (00:02:40):

Gurathin.

MO (00:02:40):

Gurathin.

CW (00:02:40):

There you go. Thank you. Yeah.

MO (00:02:43):

Yeah.

CW (00:02:45):

Yes. He is not a nice fellow.

EW (00:02:45):

James, are you passing, or are you?

JR (00:02:47):

I have not watched "Murderbot." I have heard about it. I have heard it is good.

EW (00:02:50):

Watched? No! <laugh>

CW (00:02:50):

Well, it has been a good show. Thanks for joining us.

MO (00:02:53):

<laugh>

JR (00:02:53):

Sorry!

CW (00:02:59):

Next question. Which of these is not a real famous safe cractor? Safe cractor? Safe tractor? No. Safe cracker. Fredericka Mandelbaum, Johnny Ramensky, Linus Yale Jr. or John Bridger?

EW (00:03:14):

That question is impossible.

CW (00:03:16):

That is why I put it in here.

JR (00:03:18):

I am going to guess John Bridger. That is the one that sounds least like someone who would be cracking a safe.

CW (00:03:24):

Mark, are you going to agree with that, or go with something else?

MO (00:03:25):

I do not know. You sent that and I was like, "Oh man. I got to go look into the history of safe cracking."

CW (00:03:31):

John Bridger is correct.

JR (00:03:33):

Hey!

CW (00:03:33):

He was the safe cracker in "The Italian Job." The other three were real safe crackers in history.

JR (00:03:38):

It was a lucky guess.

EW (00:03:41):

What is your favorite processor family? Microprocessor family.

JR (00:03:47):

I can answer that. PIC18. I love the PIC18.

EW (00:03:50):

Ah, really.

JR (00:03:50):

It is so simple.

CW (00:03:51):

All right. Well, it was a great podcast, and thank you for joining us.

EW (00:03:55):

<laugh>

JR (00:03:55):

It is familiarity. That is what I have had to write the most machine code for. Maybe it is Stockholm syndrome.

CW (00:04:05):

You have to use that IDE.

EW (00:04:10):

Mark, are you going to agree with that? Or?

MO (00:04:13):

Oh, I am definitely a PIC32 person. It is-

JR (00:04:17):

Well, MIPS or ARM?

MO (00:04:19):

MIPS or ARM? Oh man, you got to go with the original MIPS. It is not really PIC32 if it is ARM. That is just Cortex-M0 in a jacket.

EW (00:04:27):

Exactly. The SAM processors are not PICs. I do not care what they say.

JR (00:04:33):

I got fooled into that when they first came out. I got the PIC32CM and I was like, "Ooh, it is a new product. No."

CW (00:04:41):

All right then, what is your least favorite processor?

MO (00:04:42):

The one in the ESP32. That one has a lot of-

JR (00:04:49):

Oh yeah.

MO (00:04:49):

Thoughtful issues.

CW (00:04:50):

<laugh>

EW (00:04:54):

<laugh> It is amazing it is so popular.

MO (00:04:55):

Xtensa.

JR (00:04:57):

Xtensa is kind of weird. I do not know that much about it.

CW (00:05:00):

It hit a weird spot in the market, at a weird time. It was the thing with Wi-Fi that was easy, sort of easy to do, right? Yeah, once you have got a market, people keep coming back to you. Like with PIC.

EW (00:05:15):

Complete one project or start a dozen?

JR (00:05:18):

Oh, definitely both. I like to start a whole bunch, and then farm for the one that I actually get done.

MO (00:05:26):

I agree with that. It is kind of the shotgun approach. See which one blossoms.

CW (00:05:32):

Favorite fictional robot?

MO (00:05:33):

Definitely got to be Baymax. I absolutely love the human interface design of the inflatable robot.

JR (00:05:43):

I would say- I forget its name, but the first robot you meet in the video game, "Soma." It is an experience.

CW (00:05:53):

I am not familiar with that game.

JR (00:05:55):

Very good game.

CW (00:05:56):

2015. Oh, okay.

EW (00:05:58):

And now, a tip everyone should know?

MO (00:06:01):

I always like to say, do not be afraid to reach out and ask people about the things they are passionate about. They are probably just as excited to talk to you, as you are to them.

EW (00:06:11):

I had a phone call this morning. He asked me about something. I realized five minutes later I was still burbling, and I did not know how to stop. Then I just abruptly stopped and he was like, "Oh no, I was interested." And I think, "Oh, okay. Okay. Yeah, yeah."

MO (00:06:28):

That is always great.

CW (00:06:30):

You ever talk so much your hands get tingly, because you are not breathing right? Sorry, next.

EW (00:06:35):

James, do you have a tip everyone should know?

JR (00:06:37):

I have a tip, which is when you are trying to do something, either trying to build something or reverse engineer something, you need to believe that it can be done. Or that you can find what you are looking for. Because if you go into it skeptical of yourself, I think you are much more likely to give up prematurely.

(00:06:57):

That is how we approach things like these safes, believing we are going to find something, and in this case we did.

EW (00:07:06):

Okay. That was actually a really good tip. I feel like James did not get the memo.

CW (00:07:10):

His was fine.

EW (00:07:12):

But it was really good. The whole believing you can do it is really important, especially for reverse engineering.

CW (00:07:18):

Yeah, but that does not apply to me. I do not believe I could do anything.

EW (00:07:22):

A few months ago I heard that the two of you gave a talk at DEF CON, with multiple demos on a large stage. You called it "Cash, drugs and guns: Why your safes are not safe," which is the most pandering title I could have imagined.

CW (00:07:45):

<laugh>

EW (00:07:45):

Did you actually go to DEF CON and think, "What could we do to make this crowd go nuts"?

JR (00:07:54):

That title was created in a lab. That was- <laugh> Yes.

MO (00:08:01):

Definitely there was some creating the title first, and then backing it up with actual applications by browsing the internet for random safes with these locks.

JR (00:08:12):

That is half true. We definitely- We were thinking about, "This is actually bad if somebody gets into a gun safe, because of this."

CW (00:08:19):

Right.

MO (00:08:21):

But yeah, and then we found out they are super popular on the pharmacy safes, because they are the cheapest locks that meet all the certification requirements. So all the industrial suppliers use these by default in all their safes. That was another fun experience to find out.

EW (00:08:41):

Okay. But we should step back. There exist locks that go into safes. Many different safes use one particular type of lock. Are we saying the company name? I guess we have to. SECURAM. This lock is easily crackable with some small/large amount of embedded software experience?

JR (00:09:08):

I think that is an interesting way to put it. This is something that we argued with SECURAM a little bit about: "How practical is this? How easy is this?"

(00:09:20):

The way that they framed that, and of course I am paraphrasing, is, "You have to spend hundreds of hours and be an embedded security expert, and have all these special tools and lab equipment and stuff, in order to do this." Which is in a sense true.

(00:09:35):

But also once the tool has been created, it has been created. Then it takes, I think, I did it on stage in maybe 30 seconds.

EW (00:09:46):

It was fast. There were two different exploits you talked about in the presentation. Could you describe them?

JR (00:09:56):

Sure. So the two exploits we talked about, we called the first one- Well, I do not know what order we went in. But "code snatch," which is a physical electronic tool that goes up through the battery port and hooks onto a debug port inside the part of the lock that is on the outside of the safe, the keypad.

(00:10:18):

It reads the super code. That is the code of the highest level permissions, out from the keypad, because it is in fact stored in the keypad.

(00:10:27):

The other exploit we called "reset heist." These locks have a procedure that you can do on them, where you put it into a particular mode. Then you call the OEM, you give it a code that shows up on the screen, they give you a code you type back in, and it resets all the codes to the default.

(00:10:51):

I should add, because I always forget to add this, only a locksmith, a registered locksmith, is supposed to be able to make that call.

EW (00:10:59):

But you did not need to make that call, because you just reverse engineered the software.

JR (00:11:02):

Right.

EW (00:11:02):

So you have one where you walk up to it, you take out its battery, you put in your tool, and then you get a number. You put back in the battery, you type in the number, and now you have an open safe and nobody can tell you made any changes.

JR (00:11:20):

Right.

EW (00:11:20):

And then you have a different one, where you do not even have to take out the battery. You just walk up to it, type some numbers, use a bit of software to find some other numbers, and then type those in. Now the lock is in factory default, so its password is stupid and you open it.

JR (00:11:47):

Pretty much. To the second, there is the caveat that if the owner of the lock has changed a couple of the default codes, you have to know those to be able to do that process. Whether it is with our software, or with calling SECURAM.

(00:12:02):

So if those have been changed, then you cannot do it. Or if it has been disabled, which it can be disabled, then you cannot do it.

EW (00:12:09):

But the information on how to change those, or even that you are supposed to change them, is supposed to happen when the lock is installed into the safe.

MO (00:12:21):

Yeah, I think it is quite hidden and obscure. They have a great webinar on YouTube titled "Locksmith only with drill points," that is for locksmiths about these locks. In there they say, "Yeah, these codes are these technical things that are here, but nobody ever changes them. Do not bother, there is no security impact."

(00:12:44):

I would be blown away if even sophisticated users of these had any concept of these extra internal codes that affect the reset process.

JR (00:13:00):

The data point that I like for that is, we bought quite a few locks off of eBay, including locks that were locked and the seller did not know the codes for. On all those locks, the codes related to this process were the default. So we were able to do the recovery process.

EW (00:13:16):

The recovery process of resetting it back to zero. Not the, I want to say, cracking process of finding the code?

JR (00:13:28):

That is correct.

MO (00:13:29):

Although both-

JR (00:13:31):

Well.

MO (00:13:32):

Where they are applicable. But yeah.

JR (00:13:34):

Not exactly. The code snatch of actually reading out the code, only works on locks made during a certain period. So there is kind of a new hardware and an old hardware. I forget when the switchover was. I used to know.

(00:13:55):

Anyways. But the old ones, our tool does not work on. I think it would be interesting to look into them. For really old ones, there was another commercial tool that did the same thing. Then there is a period where that tool does not work, and our tool does not work.

(00:14:15):

We have heard through the grapevine that they have made some product change. So potentially it does not work on the newest locks either. But we have not verified that.

EW (00:14:25):

These tools, as the company sort of said- These tools looked very easy to use, but it was not like you just spent a few hours playing with this.

JR (00:14:43):

Right.

EW (00:14:43):

You did some serious reverse engineering. If I wanted to follow that path, after I got a few dozen instances of the lock, what would be the next step?

JR (00:14:56):

Well, the first step is always once you have the hardware, you have to get the firmware off. For that we attacked the debug port on these. They have a combined debug and programming port that does both.

(00:15:12):

The programming interface does not provide a way to read out the memory. The debugging interface does not intentionally or directly provide a way to read out the memory.

(00:15:25):

But what the debugging interface provides is a method to write any data you want into RAM, and a method to jump execution to a particular place in RAM.

CW (00:15:39):

Huh.

JR (00:15:39):

So you can upload a little program that reads every byte in the entire memory space. It is a unified address space on this part. And spits it out over the same serial port that is used for the debugger.
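[A minimal sketch of what such a RAM-resident dump stub could look like. Everything here is a placeholder invented for illustration (the UART register addresses, the busy flag, and the memory bounds are not the real part's), but the shape matches what is described above: a small loop that walks the unified address space and pushes each byte out the same serial port the debugger uses.]

```c
#include <stdint.h>

#define UART_TX   (*(volatile uint8_t  *)0x400E0000u)  /* assumed TX data register   */
#define UART_BUSY (*(volatile uint32_t *)0x400E0004u)  /* assumed "busy" status flag */

void dump_stub(void)
{
    /* Assumed span to dump; a unified address space means flash, RAM,
     * and peripherals are all reachable with plain loads. */
    for (uint32_t addr = 0x00000000u; addr < 0x00100000u; addr++) {
        while (UART_BUSY) { }                       /* wait for the transmitter */
        UART_TX = *(const volatile uint8_t *)addr;  /* send one byte to the host */
    }
    for (;;) { }   /* park; the host now has the full memory image */
}
```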

EW (00:15:53):

But surely they protected the debugger, and you could not just type at it.

MO (00:16:00):

Yeah, it is great. In this part they actually have a bunch of protections for this. They have a disable bit, so you can turn the debug interface off, and then you cannot send any commands.

(00:16:11):

And then they even have an interface where you can set a password. So even if the debugger is enabled, you have to enter the right ten digit password, which would take until the heat death of the universe to guess.

(00:16:28):

I actually wrote a bunch of code on a Pi Pico to reimplement the debug protocol. Spent a bunch of time working on this, trying to get it to work.

(00:16:40):

The nice thing is the PlayStation guys- This is the same processor that is used in the PlayStation 4. These were exploits that they discovered for this processor, and that we were implementing, although they did not have a ton of great documentation on all the nitty gritty details.

(00:17:01):

As we were trying this out, trying to see if my implementation worked, and trying the glitching parameters, just to debug what was going on, I tried to send the debug commands, not expecting to get anything back from the processor. Lo and behold, we got responses back from the debugger. So it actually turns out they did not disable the debugger.

(00:17:25):

And I set up this great glitch loop to go glitch the password, to try to figure out what was going on. I was not really having that much success with it. So I went just to manually inspect the data and figure out what was going on. I sent the debug password of all zeros, which is like the default disabled state, just as a test vector.

(00:17:45):

It turns out that the lock just unlocked with that vector. So they did not disable the debugging, nor did they set a password. So we did not need any glitching at all to get into this part.

CW (00:17:58):

Speculate on how that happens.

EW (00:18:00):

<laugh>

JR (00:18:00):

<laugh>

CW (00:18:00):

So you go to the trouble of having- You are a lock manufacturer, or at least a lock mechanism manufacturer. You go through the trouble of putting locks on your lock, and then you leave them unlocked. Is that basically what is going on?

MO (00:18:19):

I think they did not read the manual for their processor, to do things like disable the debug interface. Which Renesas has pretty good documentation on. It is quite nice.

EW (00:18:35):

They had a way to turn off the debugger and they did not do that, which sometimes that happens. And they had a way to put the debugger behind a password, which I have definitely done that one. But they left their password to be all zeros.

CW (00:18:55):

Yeah. Same as the launch codes for the nuclear missiles, for decades.

JR (00:18:57):

Exactly. Yeah.

CW (00:18:58):

Yeah.

EW (00:19:02):

I wanted you to say that, because I wanted to see Chris's face. I should have taken pictures for everyone. It was awesome.

JR (00:19:06):

<laugh>

EW (00:19:06):

But when I heard you say this in your DEF CON talk, I got weirdly angry.

CW (00:19:16):

Oh yeah. Yeah, yeah.

EW (00:19:19):

Who thought this was okay? What engineer out there said, "Yeah. That is good enough."

CW (00:19:24):

I think you are thinking about it the wrong way. I do not think it was a, "Yeah. That is good enough." I think it is just general, forgive me, incompetence. I do not think they knew they did it.

JR (00:19:36):

I agree.

EW (00:19:37):

They had to know that they wrote the password as all zeros.

CW (00:19:40):

That is just the default.

JR (00:19:40):

Yeah, that is just the unprogrammed state.

MO (00:19:42):

Yeah. You do not have to set anything.

JR (00:19:42):

Yeah.

EW (00:19:44):

So they just pulled this processor, and did nothing on top of what the processor does.

MO (00:19:54):

I think that is even a tactical issue, versus the larger problem. It indicates, "Hey, they did not really go through a security engineering process." That is my impression after looking through the whole thing.

(00:20:09):

If we step back and think about the threat model of a safe, its only job is to protect the stuff on the inside from the people on the outside. A safe system has two parts. We call them the outside part, the keypad, where you type in the codes, and the inside part, the latch, which is the bit that unlocks the safe after the right code has been entered.

(00:20:38):

When I frame it like that and I say to you, "Hey, where should you store the codes?" You might say, "Well gosh. I am going to store the codes on the inside of the safe."

EW (00:20:48):

The inside of the safe.

MO (00:20:50):

Yeah, behind all the steel. Sometimes I will be designing secure products and be like, "Gosh, it would be great if our product was a safe, or full of metal." And this is a product where that is literally the function.

(00:21:01):

But no, the codes are stored on the outside of the safe. They get checked on the outside, and then they send commands to the inside, which are only slightly encoded, to unlock the latch. It is just this very basic threat modeling failure. And then not locking the processor is just a follow-on effect of not really considering security very much at all, it seems.
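[A rough sketch of that "codes on the inside" arrangement, for illustration only. The link functions, code length, and stored code below are invented placeholders, not SECURAM's design: the keypad only forwards digits, and the controller behind the steel holds the code, does the compare, and fires the solenoid.]

```c
#include <stdint.h>
#include <string.h>

#define CODE_LEN 6u

/* Assumed hardware hooks on the inside-the-safe board; placeholders for
 * whatever link and driver a real product would use. */
uint8_t link_receive_digit(void);   /* next keypress forwarded by the keypad */
void    solenoid_fire(void);        /* unlatch the bolt */

/* The code lives only behind the steel; the keypad never stores or checks it. */
static const uint8_t stored_code[CODE_LEN] = {1, 2, 3, 4, 5, 6};

void latch_task(void)
{
    uint8_t entered[CODE_LEN];

    for (;;) {
        /* Collect a full entry; the keypad only ships digits inward. */
        for (uint8_t i = 0; i < CODE_LEN; i++) {
            entered[i] = link_receive_digit();
        }

        /* Compare on the protected side. In a real design this should
         * also be constant time (see the discussion later in the show). */
        uint8_t diff = 0;
        for (uint8_t i = 0; i < CODE_LEN; i++) {
            diff |= (uint8_t)(stored_code[i] ^ entered[i]);
        }
        if (diff == 0) {
            solenoid_fire();
        }

        memset(entered, 0, sizeof entered);   /* scrub the attempt */
    }
}
```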

CW (00:21:31):

As a devil's advocate, I do not know how expensive these particular safes and mechanisms are, but is there not an argument somebody could say, "Well, this is to keep the casuals out. If you want a safe that is going to keep out somebody with embedded systems knowledge or actual safe cracking abilities, then you should buy the more expensive one."

MO (00:21:52):

I think you know at that end, the nice thing is UL has a certification program for electronic locks, and so does the European Union. There is a UL standard for what they call "high security electronic locks," which is the standard that is required for you to use locks on pharmaceutical safes. So luckily these are certified to that standard-

CW (00:22:13):

Whoops.

EW (00:22:13):

<laugh>

MO (00:22:15):

For high security electronic locks.

CW (00:22:20):

Well, devil's advocate loses.

EW (00:22:28):

You titled [it] "Cash, drugs and guns," which definitely brings up gangster vibes. But then you say pharmaceutical safes, and these are safes used to hold dangerous-

CW (00:22:41):

Controlled substances.

EW (00:22:42):

Controlled substances worth a lot of money. And the reason you are doing this is to abide by laws. You have to put some things in safes. Not because you are trying to hide them, but because that is the correct thing to do. And yet these locks are certified to be fine, and yet you can- Who is in charge here? I have notes.

JR (00:23:17):

Well, the UL standard basically covers a lot of good mechanical stuff and good mechanical test methods.

(00:23:25):

I do not remember exactly what it had to say about electronic test methods. It had some stuff in there like, "Okay, if I jam mains electricity onto the battery port, that does not open it." The number of codes you can have or that sort of thing.

(00:23:44):

But as far as what it had to say about cybersecurity, it was basically very vague. It left it up to interpretation. If you wanted to go do all the stuff we did, that would maybe in some sense technically be valid. But there is no requirement to really do any particular cybersecurity analysis. Mark, is that more or less right?

MO (00:24:12):

Yeah. I think the UL standard does not- It was written based on the mechanical lock standard. It really does not contemplate embedded security in the way that we think about it today.

(00:24:31):

Thinking about, "We think about what we are going to design before we design it. And we consider what we are going to do. And we take mitigation measures. And we at least document that we thought about it, and what we did."

(00:24:43):

It is very mechanical focused. They have things like, "Oh, it has to be six digit code." And like James said, you cannot- There is a technique called "spiking the lock," where you just apply very high voltage to the communication pins. In older lock models, like more than ten years ago, that might have just burned through the processor and opened the solenoid and they say, "Oh, you cannot be vulnerable to that."

(00:25:09):

But this notion of modern embedded security is really absent from the UL high security safe standard.

EW (00:25:19):

But EU has standards that should be relevant. They have the Cyber Resilience Act, and then I think ETSI EN 303 645, blah, blah, blah, Cyber Security for Consumer Internet of Things: Baseline Requirements. There are good standards on security by design in electronics. But those are not related to the locks?

MO (00:25:50):

The lock manufacturers and the lock standard creators do not have that incorporated in their standards. There is even the European one, which is EN 1300, that is a more comprehensive standard than the UL one. But similarly, it really does not integrate the modern embedded security standards, like you mentioned.

(00:26:16):

All the work that the FDA has done to set those standards, and CISA and NIST, as well as the automotive manufacturers. The lock world is way behind, which is wild when you consider the purpose of a car is not to secure valuables, but the purpose of a safe is only to resist attack. That is its only purpose in life.

EW (00:26:42):

Do they think that this is not an Internet of Things thing, and so therefore security is not relevant?

CW (00:26:48):

Well, it is not connected to the internet.

EW (00:26:50):

Yes, that part is true.

MO (00:26:52):

A lot of them are.

CW (00:26:53):

What? Wait. Say that again.

MO (00:26:54):

Yeah. They have great models of this that are connected to the internet.

CW (00:26:57):

Why, why and why?

MO (00:26:59):

Wi-Fi and Bluetooth, and-

CW (00:26:59):

No!

EW (00:27:02):

No. It would be useful. For pharmaceuticals, knowing which safe got open today, and being able to track data and lot numbers. It would be totally useful.

CW (00:27:11):

I have half a dozen ways to do that, that do not involve connecting it to the internet.

EW (00:27:15):

Like scanning stickers?

CW (00:27:16):

Or exposing it to Bluetooth! <laugh> Okay. Look. What would you suggest I put my drugs in?

JR (00:27:32):

A little bag or something. <laugh>

EW (00:27:37):

So there are internet locks. But let us not worry too much about those. They can only get worse that way. What should they have done? If I get hired by these people tomorrow, what should I do?

MO (00:27:56):

The first thing is, like I said, when you think about the threat model, the easiest win you can do is put the codes on the inside of the safe.

EW (00:28:04):

Yeah. Yeah.

MO (00:28:04):

Because at least your lock is not making the safe worse. If you have to cut through the safe to get to the lock, the electronics, then you have cut through the safe, so your security has died.

JR (00:28:16):

One of the tricky things about this is actually safe codes are quite short. They are six or eight digits, depending on the standards that you apply. Like I said, the highest security standards are eight digit codes.

(00:28:30):

Even if you had hashing or bcrypt or something like that- I did some math. If you took the processor on this, which is not a slouch, and you set the work factor so high that it took 20 hours to do all the bcrypting to check the hash to open this, you could still, with a 4090, exhaust the entire space in less than a day. So there is no hashing that helps.
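[The arithmetic behind that claim, sketched out: an eight digit code gives 10^8 possibilities, so covering the whole space in a day only requires a modest guessing rate on the attacker's own hardware, no matter how slow each individual check is made on the lock itself.]

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative arithmetic only. An eight digit code has 10^8
     * combinations; if an attacker's hardware can exhaust that space in
     * one day, this is the average checking rate that implies. */
    const double keyspace        = 1e8;      /* 8 decimal digits     */
    const double seconds_per_day = 86400.0;  /* 24 h * 60 min * 60 s */

    printf("guesses per second to cover 10^8 codes in a day: %.0f\n",
           keyspace / seconds_per_day);      /* roughly 1,157 guesses per second */
    return 0;
}
```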

(00:29:05):

The nice thing is the only thing you can do is put it inside the safe, which is a really easy thing to do. Then all your problems are second order safe problems, like side channel analysis or other kinds of analysis, where you are reading tea leaves over the data cable between the keypad and the bit inside. Instead of putting logic on the outside that does stuff.

CW (00:29:35):

Yeah, what was the thought process of-

EW (00:29:37):

Because they are lock manufacturers. They make this one part-

CW (00:29:40):

Ah, yeah, and they cannot integrate that way. Sure, sure.

EW (00:29:41):

And then it is an on off to a solenoid.

CW (00:29:42):

Yep, yep. Yeah, yeah.

EW (00:29:44):

And then if somebody else provides power to the solenoid, you only have to make the keypad as one single thing.

CW (00:29:51):

But there is an argument like, "Okay, if I am going to spend 20 hours doing something brute force, how is that different from getting a diamond tip drill press and just physically breaking in?"

MO (00:30:03):

Oh yeah. That is why I said 20 hours- That is like if you entered the code and it took 20 hours for the safe to tell you your code was wrong.

CW (00:30:10):

Oh, I see.

MO (00:30:10):

That is what I was telling you.

CW (00:30:11):

Oh. Okay. Okay, okay.

MO (00:30:12):

The reason I chose 20 hours is because the UL standard actually says that the code needs to resist, the lock needs to resist, attack for 20 hours. That is basically them saying, "Hey, this cannot be the weak link. We want the physical safe to be the weak link. We want you to say, 'I might as well get a diamond tip saw,' not, 'I am going to bring my ChipWhisperer in here and attack this.'"

CW (00:30:33):

Got it. Okay. That makes sense.

EW (00:30:37):

When you were starting this process, did you get a ChipWhisperer and all the glitchy things and a couple of JTAG different units, and sit down prepared to crack this the hard way?

MO (00:30:56):

It is funny you say that. There is some more history about how we got into this. I read an article by the New York Times about how when- There is a team doing January 6th investigations, and they went to this person's house. They had a safe from a company called Liberty Safe, that uses- They are a safe OEM, that by default uses these SECURAM locks.

(00:31:23):

In the article they said it was a kerfuffle between Liberty Safe and their customers, because Liberty Safe- The FBI called Liberty Safe and said, "Hey, can I get the code to open the safe?" And they gave them the code, and they were able to open it.

(00:31:40):

I read that and I was like, "There is no way that that is implemented securely."

CW (00:31:46):

Oh. No. Yeah.

MO (00:31:48):

So we bought a different model of this lock. This whole time we were talking about the ProLogic series. We bought an earlier version of that, called the "ScanLogic series." Those are the much cheaper, like the very cheapest locks. They have no screen. They are just, you press the button and they beep. They actually have no logic in the keypad. All the logic is inside the safe.

EW (00:32:12):

Yay!

MO (00:32:14):

We spent a long time analyzing that. There is a processor, this new low cost processor. That one was so old, that there was literally no way to read out the memory. There is no debug port on it.

EW (00:32:26):

Yay!

MO (00:32:26):

We spent several months developing a completely novel way to dump memory from that chip. We gave a talk at hardwear.io about that chip. Really fascinating, really cool way that we did it. And we did do all that stuff. We not only used a ChipWhisperer, we used really high-end PicoScopes. We subjected it to the best attacks that we could create, tons of analysis.

(00:32:56):

We determined that the only possible vulnerability was that the code they used had a non-constant time compare. So it would bail out of the compare loop early if your code stopped matching at some point. Then you can use some timing analysis and stuff. We were actually unable to even get a proof of concept of that working at all. So the lower end models were a lot more secure, and that did not work.
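[For illustration, a generic example of a non-constant-time compare next to a constant-time one. This is not the lock's code; the early return is what leaks, through timing, how many leading digits of a guess were correct.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Early-exit compare: returns at the first digit that differs, so the
 * time it takes depends on how many leading digits of the guess match. */
bool compare_leaky(const uint8_t *stored, const uint8_t *entered, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (stored[i] != entered[i]) {
            return false;   /* bails out early: this is the timing leak */
        }
    }
    return true;
}

/* Constant-time compare: always touches every digit and only looks at
 * the accumulated difference at the end, so the timing does not reveal
 * where the first mismatch happened. */
bool compare_constant_time(const uint8_t *stored, const uint8_t *entered, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= (uint8_t)(stored[i] ^ entered[i]);
    }
    return diff == 0;
}
```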

(00:33:31):

But then that took us to the higher end models with the screen. That is where we, like you mentioned, we opened it up, we did some recon. We figured out that, "Oh yeah, these are in PlayStations." And from there it is easy, because those guys are relentless.

CW (00:33:45):

Can I go back to that? Why are the- Are these support chips for PlayStations? What is the role of the-

MO (00:33:53):

I have no idea. I think they are the part of the platform management.

JR (00:33:56):

So the basic bring up and all that stuff.

EW (00:34:01):

But yeah, because it is in the PS4, they have been relentlessly hacked.

CW (00:34:07):

Yeah. Well known.

EW (00:34:08):

By people who have lots of time and view the video game as cracking the PS4 security. And having it be part of the system management means you can break interesting things.

CW (00:34:22):

Let this be a lesson to your kids. Pick an obscure chip that is maybe 40 years old, that people are not using anymore. Nothing wrong with an 8051.

EW (00:34:31):

The Liberty Safe and the key to the FBI, that was probably using the, "I am a locksmith, so give me the reset methodology"?

MO (00:34:44):

It is funny. It actually is not. The reset feature is actually a feature only on the high-end locks.

(00:34:54):

What is going on on the low-end locks is actually something quite boring. The low-end locks only support two user codes. One is called the "manager" and one is called the "user." When you buy a Liberty Safe, the factory programs the manager code to something, and then they set the user code to all ones.

(00:35:14):

In the manuals you get from Liberty Safe, they do not tell you about the manager code. They only tell you about the user code. So the way that they did that is not through any complexity, it is just this management vulnerability. They set a code on it. They are a user on your lock, and you just did not know.

CW (00:35:34):

That feels actionable, legally.

JR (00:35:40):

These days they say you can write to them and ask them to delete it, and they will. Or at least they will say they will. I have no reason to doubt it.

EW (00:35:50):

If you know to do that.

JR (00:35:51):

If you know to do that. I know they also changed their policy after that, to require a subpoena. They made it a higher legal burden for whatever authority to get it, rather than just basically asking in connection with a warrant. Still, somebody has your code.

MO (00:36:13):

And hence why there was this controversy that the New York Times reported on.

EW (00:36:19):

I still feel like the lock manufacturers do not really understand what their product does.

MO (00:36:25):

It has been interesting to connect with a lot more people in the lock industry since we gave the talk. I agree, it is like people who are developing secure software in medical and cars and all these other areas are all together, mixing around, doing good stuff. And the lock people are on their own little island.

(00:36:52):

There is- One of the things that was most surprising when we really started digging into this field, is there is a long proud history of people exploiting bad design in safe locks.

(00:37:06):

Most of it is actually private. So there are these tools, called the "Little Black Box" and the "Phoenix Tool." Those are commercial products, sold only to locksmiths, that can actually unlock several dozen lock models that all have various vulnerabilities.

(00:37:26):

This was kind of wild to me. There is this long list of locks that not only have vulnerabilities, but are known [and] exploited. Because you can purchase tools as a locksmith that unlock all these models, from all the major manufacturers, over this long period of time.

(00:37:45):

So it is not even like this is the first time it has been done. People have been breaking these for years, and they have not gotten substantially better.

CW (00:37:56):

That reminds me of the- I do not know if you have seen the LockPickingLawyer YouTube channel?

MO (00:38:01):

Oh yes.

CW (00:38:03):

This goes for all kinds of locks. You can go to his website and buy, for educational purposes, things that basically take the place of an entire pick set. And automatically pick in three seconds most kinds of locks you have on a house or padlocks and things.

(00:38:19):

But those kinds of locks are in a different situation, right? Because the safe and the lock on the safe is an entire system, that is designed to protect the interior space. Where a lock on a house, everybody realizes if somebody really wants to get in, they are going to break a window. They are going to apply lots of kinetic force to the door. Because it is meant to discourage, not completely prevent access.

MO (00:38:47):

One of the things we figured out is also that is actually the same as a safe. If you put a safe in a field, people are going to get into it.

CW (00:38:54):

Eventually. Yes.

MO (00:38:54):

It is part of this layered defense, for everything. All security is like that. But my goodness, I agree. They are designing products that are only supposed to be secure. It just blows me away that they are not applying best in class security to this very constrained embedded system.

CW (00:39:14):

I would argue they are not even applying basic security, given what you have said.

JR (00:39:22):

It feels like when we are doing medical products, or whenever people are doing products for industries that have standards for this sort of thing, you apply a certain way of thinking about it. You have a secure product development framework. You have standards you are trying to meet, that actually say something about cybersecurity.

(00:39:41):

It just feels like this is a case of just making something. I think we have all been there. You make something and it is not going anywhere serious. You do not think about it that hard. It is like, "Okay, well it works. I forgot to disable the debug port. Ah. I did not even think about it."

(00:39:57):

If you do not have that checklist and that standard in place, it is very easy to just make something that fulfills all the visible requirements. Without having properly analyzed the things that you cannot see so immediately on the surface.

EW (00:40:17):

Okay. Yeah. 25, 30 years ago, yes. Now there are checklists. There are people who want to help you. Not even- There are people who want to help you for money. But there are people who, I do not know, give away free information at DEF CON.

CW (00:40:36):

And there are a lot of people who will yell at you, apparently.

MO (00:40:37):

But until you get yelled at, you might not know about those things.

CW (00:40:42):

Yeah. Yeah. Yeah.

EW (00:40:43):

But you actually delayed this presentation by quite a while, because you were disclosing to the manufacturers.

JR (00:40:52):

That is true.

EW (00:40:54):

Instead of saying, "Thank you. Oh my God. We will fix this as soon as we can," they said, "We are going to sue you."

MO (00:41:04):

We have got to be very precise. They never said they would sue us. I do not remember what the exact wording was, but it was implied, perhaps.

EW (00:41:15):

I believe it was something along the lines of, "If you go public with this, we will sue you."

MO (00:41:22):

Along those lines.

(00:41:24):

Yeah. "We will refer this matter to our council for trade libel, if you choose the route of public announcement or disclosure."

JR (00:41:29):

That is right.

CW (00:41:30):

Hm.

EW (00:41:30):

That is a little scary.

JR (00:41:34):

It was.

MO (00:41:35):

We were trying to responsibly share with them. We actually reached out to both SECURAM and Liberty Safe in March of 2024 and said, "Hey, we are kind of looking into this. We wanted to let you know. We would love to be connected to the right technical people to disclose stuff, if we find something."

(00:41:58):

At that time, we had not really distinctly found anything. But we had definitely gotten deep enough into it, that our spidey sense says, "There is going to be stuff here." They were initially warm and they said, "Oh, hey. Thanks for letting us know. That is great."

(00:42:16):

Then in April, we sent them a detailed technical disclosure about a bunch of things that we found. We actually had some additional findings that we did not share at DEF CON, just because they were not exciting. Very soon after that, that is when they talked about, "Hey, we are going to-" They implied that they are going to sue us, if we talk about this publicly.

(00:42:44):

We made sure that we contacted them well ahead of the standard disclosure timelines and stuff like that, so they had plenty of extra time. They were not really receptive to that.

(00:42:59):

As a result of that, we actually got connected with the EFF. At the year of that DEF CON that we were- I think it was the year we were going to go, another talk was given about vulnerabilities in lockers. So there are electronic lockers that are at gyms or things like that.

(00:43:27):

They, at the nth hour, got legal notices from the company that they were disclosing vulnerabilities to, and the EFF stepped in. They have this project called the "Coders' Rights Project," that helps people in that kind of situation to communicate with the companies, and provides representation through that, just in the scope of talking back and forth with them, with the manufacturer.

(00:44:02):

We got in contact with them. They were just unbelievably fantastic and supportive through the whole process. They helped us write letters and communicate with the company, and write letters on our behalf, as our attorneys to them.

(00:44:24):

Eventually we got to the point where they were salty about it. But the EFF had convinced us, and we had together gone through the responsible disclosure and talking, that, "Hey, we should go ahead with it." Even though at the time, their position was still, "We will refer this matter to our counsel, yada, yada, yada."

(00:44:56):

Only after we said, "Hey, despite you saying this, we are going to go present," did we actually hear from SECURAM lawyers, in July, the month before August when we were going to present. Talked with them a little bit, and ultimately before and after. Since then we have not had anything happen. But yeah, it was a long tense process.

EW (00:45:25):

So there have not been repercussions yet.

JR (00:45:31):

Right. I think one of the things that the EFF made sure to emphasize is, you cannot stop someone from suing you. But you can take steps to protect yourself, and to reduce that likelihood, and to try and ensure that they could not get very far if they did. That is the sort of thing they helped us with.

(00:45:58):

But I guess anytime in the next seven years or however long it is, maybe we could get a nasty letter. I do not think it is going to happen at this point.

EW (00:46:10):

There do exist many other tools that are marketed to certified locksmiths. But the tools exist, and I am sure they fall into the wrong hands occasionally. So what you did was cool, but it was not like the only thing out there.

JR (00:46:32):

The big difference is- You are right. The big difference is we went public with it. We wanted to get the word out there, and spread it as much as we reasonably could. Or at least get it out enough, that it was likely that people who own these locks would be able to understand the security profile, the real security profile, of the device that they had. Just so they could make a more informed decision about whether they wanted to have that on their safe or not.

(00:47:04):

As opposed to the locksmith tools, which keep a very low profile. They do not really advertise outside of locksmith industry publications. They do not go to DEF CON. They do not ask the people who are integrating those locks, "Hey, did you know about this?"

(00:47:25):

This is interesting. We heard from a couple people in that industry, that the MO there was if you were making this sort of tool and you found a vulnerability, you would put it in your tool, tell the manufacturer. The manufacturer would quietly fix it, and then your tool would not work on new versions. But that was about it.

EW (00:47:55):

But as a locksmith, you are making money off of other people not knowing the vulnerability. You are the only one who can open this type of lock. There is a lot to be said for a lock's main job is to not be opened by people who should not open it.

(00:48:12):

But there are always going to be locks that need to be- I remember talking to my stepdad who was a truck driver and very into cars. I was worried that the car I was looking at did not have great locks.

(00:48:31):

He explained to me that the locks on a car are not really there to be great. They are there for casual people not to walk up to your car. And that it was okay. You actually wanted your car to be openable by a locksmith, in case anything went wrong.

(00:48:55):

Even for safes, there are times that you want to be able to open a safe, theoretically.

JR (00:49:04):

Mm-hmm.

MO (00:49:07):

I sincerely agree the- We have tried security through obscurity before. It turns out it is terrible. Security through obscurity just means only the bad people have it, and the good people do not know about it.

(00:49:18):

Security out in the open means that now we have leveled the playing field. I found it wild, the industry wide view that security through obscurity was best, in locks and locksmiths and safes and stuff like that. I think that that is holding the whole industry back, in terms of making actually secure products.

JR (00:49:48):

I agree, because I think that what you are saying, Elecia, is true. Sometimes you do need to get into a safe where you have forgotten all the codes. They have tools for that, which is a big drill that drills through the safe and breaks the lock off. And if you cannot stomach that, then maybe you should remember your codes.

EW (00:50:08):

Usually it is somebody else's codes, and somebody has probably-

JR (00:50:11):

Yeah, that is probably true.

EW (00:50:15):

Okay, so you have not said, you both are still working for a company, a professional services company that does engineering. So you have not gone on a crime spree? And you have not been contacted by people who are willing to pay you a lot of money to open safes?

CW (00:50:35):

<laugh>

JR (00:50:35):

Mm.

MO (00:50:38):

No, we have not had anybody- Well actually we have had a bunch of people reach out who are like, "Oh, that was cool. I would love to get all the code and docs, because- Did you share the GitHub repo with your Pi Pico? Or, did you share the 20 lines of JavaScript that calculates the recovery code?" And I said, "Absolutely not!"

(00:50:59):

So it is ripe, ripe for abuse. We have definitely declined to share it with all the people who reached out and said, "Hey, I would love to get access to the GitHub." Or, "Where is the proof of concept?" Or all that kind of stuff.

EW (00:51:13):

Mark, I think I understand why, but why are you not just making this public? The way you did it made it clear that someone with a medium amount of familiarity with the art could probably replicate what you did in less than six months. Why are you gatekeeping, Mark?

MO (00:51:40):

<laugh> I think that the potential for abuse for this we felt was extremely high, because there is a huge install base for these locks. And, because the techniques we discovered were so simple.

(00:51:55):

We wanted to make sure we shared as much information as we can about the bad engineering. So I hope that people watch the talk who make locks and are like, "Oh no. Let me go look at our system." And they understand enough to do that.

(00:52:10):

But without sharing things like, "Hey, the exact constants that they use to compute stuff is X." Or, "Gosh, here is the exact series of protocols, or the memory map, about how they save the codes."

(00:52:24):

I do not think those tactical details add to the body of security knowledge, the stuff that people who are engineering products should think about. The only thing they enable is people with significantly lower skill or significantly lower effort investment to go do something on the nefarious spectrum.

EW (00:52:49):

How do you feel about Flipper Zero?

MO (00:52:52):

It is interesting you say that, because all the stuff that I see on Flipper Zero is actually, and I do not have a great deep context about it, but it is mostly taking stuff that was already public. Like, "Hey, here is how we can troll people by opening their Tesla port covers." You could already do that with a bunch of other software-defined radio and running Python and stuff like that.

(00:53:18):

It has just made it way, way, way easier. So now it is a zero skill thing that people can just do. I have not- There are probably people out there who are doing this, but all the stuff that I have seen from it are not serious security research. It is more taking serious security research and productizing it.

(00:53:45):

I have not heard anything that makes me think like, "Oh man. This is terrible." It is mostly, I think, stuff that is pretty far down the pipeline, and maybe boosting the visibility of it. So I do not know that I hate that it exists, but I definitely do not see it as a serious security tool. Or, I do not have a perception that it is a serious security tool, or something that is used on the research creation side of things.

EW (00:54:15):

Boosting the visibility of the lack of security is probably a very good thing, even though some of these tools have the potential to lead to bad things.

MO (00:54:31):

Yeah, I agree. I think if you are doing a good job, people who are discovering those vulnerabilities are disclosing it to the right people, and they are making good faith efforts to update.

(00:54:46):

Tesla is a great example, because they can update their cars all remotely, and they certainly had- Well, I do not know the background on the whole thing, but I suspect they had plenty of opportunity to do that, to keep that from happening. In that case, it is probably a good thing to push people to make the right changes.

EW (00:55:07):

James, you mentioned you bought some locks on eBay, that eBay vendors said they could not open.

JR (00:55:14):

Yeah.

EW (00:55:14):

Does eBay also sell locked safes?

JR (00:55:18):

I have not looked for locked safes on eBay, but I do- There is a public surplus auction here in Arizona, that I browse idly because they had something good once two years ago, and I am just chasing that high.

(00:55:34):

Not infrequently locked safes come up. I always have to zoom in and see is it a SECURAM ProLogic. Maybe speaking to the true market share, so far it never has been, but I always check.

EW (00:55:48):

If you saw one of these in the wild- You walk into your favorite sandwich shop, and see they have a lock of the kind you know how to open. Would you ask the clerk if you could just try?

JR (00:56:07):

I would not. I think Mark probably would.

MO (00:56:09):

Well it is funny. One thing I wanted, I desperately wanted, in the DEF CON presentation that we did not get, is I wanted to show a video not of a demo of us opening a hot pink safe on stage, but a video of me opening one at a dispensary that was full of millions of dollars of cash, or opening one at a CVS.

(00:56:30):

But you might be surprised to know that if you ask people, "Hey, can I show how we can pop open your safe full of cash or drugs?" They are not really excited about that opportunity. So we did not end up doing that.

EW (00:56:49):

But you still could. Now that you have done this, do you have- What is the next New York Times article you are after?

MO (00:56:58):

We gave a presentation at hardwear.io last year, which is kind of like a check-in, where we developed some novel- Or we kind of somewhere between developed some novel, and expanded the scope of some previous, STM32 exploits to dump code.

(00:57:15):

We developed those because we have been working on and off on hacking the parts of fridges that implement DRM for water filters. That is kind of the next thing in our pipeline that we are working on.

EW (00:57:30):

DRM for water filters.

CW (00:57:32):

Seems a little safer. Not as likely to have- Yeah.

JR (00:57:35):

You never know.

CW (00:57:36):

Organized crime come after you.

MO (00:57:40):

Safer for that. Sure.

EW (00:57:45):

Could you mention something about- This was hardwear.io. Is the presentation online?

MO (00:57:52):

Yeah, so we have talked at hardwear.io three times in a row now? James?

JR (00:57:58):

I have been there three times in a row. You have been there two.

MO (00:58:01):

That is right. Yeah, we go to DEF CON and other places, and they are not hardware focused, embedded hardware. They are all about like, "Hey, we exploited Excel or Access or something." I am like, "Oh man, luckily my products do not run that stuff."

(00:58:20):

It is the best place for people who are working in embedded security and hardware security. And so yeah, we have given some talks there. They are all available on YouTube, along with talks from a bunch of other really talented people. So you can definitely go, I definitely encourage you to go check them out.

EW (00:58:39):

I will make sure to link to your videos and the site, which is hardwear, HARDWEAR. So basically hardwear as in wearing hardware.

JR (00:58:52):

Yeah. I always thought it would be about wearables or something.

EW (00:58:54):

I did too.

JR (00:58:55):

It is not. I will say if you want to see more of the hardware hacking type stuff at DEF CON, shout out to the Car Hacking Village. That place is cool.

EW (00:59:07):

I know in the past, Mark, you personally have given advice on our Slack channel, on the Patreon Slack channel, about getting better at security. Do you want to share a little bit about that? I guess both Mark and James. How do I get better?

MO (00:59:26):

The nice thing is lots of people have been making lots of great content, about getting better. There is the Cyber Resilience Act and NIST, which has been helping to lead the charge on making security for general devices. How do you think about it? How do you do it? How do you get started? What is the first thing that you do?

(00:59:51):

They have some great documents. The NIST internal report, or NISTIR 8259, is their standard that has checklists for the two parts of security. Which is building the device, the technical checklist, and then the part that everybody forgets about, which is the post-market. What do I have to do after it is in the field? I cannot just chuck it out the door and forget about it.

(01:00:19):

Those are actually the standards that were developed in conjunction with the US Cyber Trust Mark standard, which is designed to set, via certification, a baseline level of security for consumer IoT devices.

JR (01:00:37):

Yeah. I would say following all the standards, very important. Do cross all your I's, dot all your T's, and do not skimp on any of it. What I like to think about is also just security at a product level. Thinking about- It is threat modeling. But it is also how do those threats apply to the physical configuration of your system, if it is an embedded system.

(01:01:06):

Mark likes to call this system theoretic threat analysis. Because with something like the SECURAM ProLogics that we exploited, it did not come down to buffer overflows or side channel analysis or ROP gadgets or anything like that. It was just that it was designed in a way that did not lend itself to security.

(01:01:37):

No matter how good the implementation of that design had been, it is still worse than putting the codes on the inside of the safe, having a debug port that can be locked in a more secure manner, or locking it at all.

(01:01:55):

So it is important to- If you start by thinking about it correctly, then even if your implementation sucks, you still probably have way better security. Just by organizing your data and your hardware in a way that places what you are trying to protect as far away as possible from who is trying to attack it.

EW (01:02:18):

I worked with a guy who would say that one of his main goals in his career, was not to end up with his face on the cover of WIRED as being an idiot.

MO (01:02:27):

<laugh>

JR (01:02:27):

<laugh>

EW (01:02:30):

It seems like a low bar, and I feel a little bad because I know that the person who worked on the lock security probably did not have time or was just out of college. There were lots of reasons, but whoever left the password as all zeros, and did not turn off the debug port, does deserve to have their face on the cover of WIRED as, "Do not do this."

CW (01:02:56):

Aarrh.

EW (01:02:56):

Maybe a cartoon of their face.

JR (01:03:04):

I think it is more about organizational engineering rigor. Yeah, everybody needs to take some responsibility. But as you say, I can easily imagine somebody fresh out of college, somebody who had never worked on a security product before-

EW (01:03:18):

Who did not know chips had security.

JR (01:03:20):

Did not know chips had security, had never gotten yelled at yet.

(01:03:23):

If your engineering department does not have the protocols in place to have somebody review it, to do that threat modeling, these are the kind of failures that I like to call a team effort. Right. It is hard to identify any one person. Or maybe if we knew their org chart, maybe it actually is just one guy and he is not very good, but-

EW (01:03:54):

I imagine it was coded to specification, and the specification was just very bad.

JR (01:03:59):

Exactly. So what I am saying is it should be a group picture on the cover of WIRED.

MO (01:04:03):

<laugh>

EW (01:04:05):

Ah, yeah. Just get that whole team down there.

CW (01:04:08):

Company logo.

EW (01:04:09):

Thank you both for talking to us. Mark, do you have any thoughts you would like to leave us with?

MO (01:04:14):

I think sometimes people see security as this asymptotic thing: even if I work on it for infinitely long, people who are smart enough can go break into it. And the nice thing about asymptotic curves is the opposite is also true.

(01:04:28):

There is the 80:20 rule. If you do a little bit of effort, you are going to get huge returns. And if you do some, even if it is not perfect, even if it is not awesome, you are going to be way better off than doing nothing.

EW (01:04:44):

And James, do you have any thoughts you would like to leave us with?

JR (01:04:47):

For as much as we have ragged on SECURAM in this presentation and what we talked about today, I do want to bring it back to the standards. Yes, I think they should have built a better product.

(01:05:01):

But they are also not wrong when they say it complies with the UL Type 1, whatever the number is, for high security electronic safe locks. Both companies and consumers should be able to trust that if that is on there, it means something. So it is a team effort, like I said, but that does not end at SECURAM's front door.

EW (01:05:26):

That is a good point to make. Our guests have been Mark Omo, engineering director at Marcus Engineering and James Rowley, senior security engineer at Marcus Engineering.

CW (01:05:36):

Thanks to you both. It was good to talk to you.

MO (01:05:38):

Thank you.

JR (01:05:38):

Thanks for having us.

EW (01:05:40):

Thank you to Christopher for producing and co-hosting. Thank you to our Patreon Slack group for their encouragement. And thank you for listening. You can always contact us at show@embedded.fm, or hit the contact link on embedded.fm.

(01:05:56):

And now, well, there are a lot of security quotes, and I just could not choose. So let me tell you about one of my new favorite creatures. This is a quote from Robin Wall Kimmerer, in her book "Gathering Moss."

(01:06:15):

"The water bears simply shrink when desiccated, to as little as one eighth of their size. Forming barrel shaped miniatures of themselves called "tuns." Metabolism is reduced to near zero and the tun can survive in this state for years. The tuns blow around in the dry wind like specks of dust, landing on new clumps of moss, and dispersing further than their short water bear legs could ever carry them."