Legacies | Hardy Hardware

Compiler Team
Professional development
History of technology


Hardy Hardware | Compiler: Legacies

About the episode

Old hardware needs love and attention. But not too much—you don't want to upset the status quo. Legacy servers, devices, and machinery are probably part of every company's can't-lose infrastructure. And it's not always possible to replace them.

Hardware has a knack for sticking around. Knowing how to keep it running can help avoid catastrophe.



Subscribe here:

Listen on Apple Podcasts
Listen on Spotify
Subscribe via RSS Feed

Transcript

Last episode started our new series on legacy technology. Today, we're going to cover some of the more obvious denizens of planet legacy—hardware. And Aaron Lee is going to help us understand why it gets so much hate.

To a certain extent, hardware is the lodestar of all problems. I think that no matter what problem you're facing, if you peel it back far enough, hardware is going to be the issue at the center of it.

The hardware is literally the thing you interact with, and aging hardware is everywhere. But you can't always simply transfer your stuff from one system to another, and the further back you go, the less you can separate hardware from the software that's running on it. And that makes things like upgrading aging hardware difficult, even risky when it's running a system your organization can't function without. You end up with this kind of onion of hardware, with really antiquated hardware at the center of these problems. A lot of our modern hardware is just built on the bones of ancient, decrepit hardware, and that sits at the center of a lot of the problems we face in day-to-day life.

So why are these hardware onions going to make us cry? What kinds of problems can they cause, and how do we work with older hardware to minimize the chances of catastrophe?

This is Compiler, an original podcast from Red Hat. We are your hosts. I'm Angela Andrews. And I'm Brent Simoneaux. We go beyond the jargon and simplify tech topics. We're taking a few episodes to consider the value of legacy technologies. If you want to hear this series from the beginning, you can start with our episode, In Defense of Legacy. Today's episode: Hardy Hardware. French producer Johan Philippine is here to cook this onion into a fine soup.

Hardy, har, har. That's what I was thinking as soon as I said it. Same.

Despite all my time on this earth, I'm continually surprised by how quickly some of my devices fail and by how incredibly long others have been in use. It's part of the reason I hoard old cables and adapters in my closet like some sort of tech goblin: you never know when you might end up using them. That's right. You never know when you're going to need it, right? I mean, just the other day I dug in there and needed a cable for something, and it happens all the time. Happens all the time. I have a whole tote. You have a tote? Of old cables and whatnot. Yeah.

Angela, do you have any stories about working with old hardware for your job? Yes, I do. In my previous position, we had this server that was around way before I got there. No one could remember the details of it, and it was still on the network. It was still running something. I think it had to do with facilities, and it was running this old version of Windows, and the rule was "Don't touch it because it works." But what about updating it? "Well, you know, it's too old to get updates, but you want to check on it. If they say that they can't access it, you want to see what's going on." It was sitting on the floor in the server room, so it was getting kicked and—it just ran, and it ran up until I left. So I'm going to assume it's probably still running. The uptime on it was amazing. We had a couple of servers that had long uptimes. I think for some it was like a badge of honor: how long can this server stay running? It's hysterical. Amazing.
Well, at the top of the show, we heard from Aaron Lee. He's a tech-native professional who's worked in a variety of customer service roles across industries. Hardware was at the heart of those roles, and much of the time those machines were older than their operators. We asked Aaron what could happen when people are expected to use technology that they haven't been trained on.

In one word, chaos. You have someone coming in without experience, and you have someone coming in who is so many steps removed from the process they're relying on that they don't even know what the most basic step to even start is. I've personally seen professors struggle to turn on a projector because that projector is almost older than they are.

Chaos. Projector? What the heck is a—? You know what a projector is! LED? No. The plastic transparency projector. Oh yeah, I remember those from school. Yeah. To be fair, projectors are notoriously finicky devices, some of them. Maybe not the transparency ones, because generally they just had that big button, but the LED ones, those can be tough. And when you start interacting with older technology, there are probably a whole host of unfamiliar aspects: obsolete ports, interfaces that aren't intuitive in the present context, and a whole bunch of other things you might run into that you wouldn't expect to. Now, for those unlucky professors, it's old projectors. For our audience out there, it could be an old server like Angela was talking about. It could be a decades-old industrial machine like we heard about in our previous episode. It could be communications devices, old network infrastructure. The list is almost endless. Now, some of these devices are very old but also very specialized.

Working in academia, we had a laser cutter, and a laser cutter is an extremely advanced piece of technology, but it's also an advanced piece of technology that we've had. It's been around in a fairly small form factor for a long time. It's also an extremely high replacement cost item. So if it still works, no one's going to approve a line item for that.

Mm-hmm. Mm-hmm. If it's not broken, if it's doing the thing that it needs to do, there's no need to fix it, right? The problem with that laser cutter is that it was only capable of running on Windows XP. It could do its one job very well, but the circuitry prevented it from having an upgrade path, which was a security risk. Now, as long as they maintained an air gap and didn't connect it to the internet, there wasn't a problem. But again, this was an academic context, and there was an available ethernet port that they had to keep people from plugging things into. And as we all know, students love to tinker with machines and find ways to break them as much as possible. So...

Remind me what year Windows XP came out? Windows XP was early 2000s. I want to say like 2001. Yeah. Early 2000s. Oh my goodness. I mean, it's been out of... We've had Windows 7 since then. Windows Vista, even before that. Windows Vista. Windows 7. Windows 8, Windows 10. Now we're on Windows 11. So that's five different iterations of Windows beyond XP that we're on now. I bet you this laser cutter's still running XP. Oh, I'm sure. Yeah. Now, Aaron was telling us about laptops that students and staff would receive to help them complete their coursework.
Sometimes it was brand new, but if you were lower on the list of priorities, like a part-time student or faculty member, or if you were an adjunct or emeritus professor, you could be assigned a stock computer, which most of the time meant older systems. Now, for those in engineering programs who had to run programs like SolidWorks, CAD software, et cetera, those laptops might not be up to the task, or would just barely meet the minimum requirements. Those programs use more resources than older computers provide, either because they don't have enough computing power, enough RAM, or a fast enough graphics card.

So you had the situation where we're asking people to complete assignments and do exams and make these simulations and 3D models, except their computer will not run it properly. So they're kind of chugging along, and typically you'll chug along until you absolutely can't. So you have the situation where you have a device that is barely hanging on and isn't running well to start with. They're running this program, it's taking up 100% of the system resources, and then they just tried to save something and now it's white-screened. This is, of course, the first time that they've saved, because they've been working on it for 20 hours straight. So now, how do you salvage this without losing everything that they've just done on this computer that really should never have been issued for this purpose?

This is the nightmare student scenario, right? Angela, I just saw you doing the sign of the cross there... Yeah, exactly. You kind of pray at that point, because when the hardware fails you, there's not a lot you can do. Which is tragic in an academic context, but it could easily apply to the enterprise, too. If you've got a server that's chugging along and you keep putting new things on it that it's not supposed to run and it crashes, then your production environment goes down, and it's a whole different thing. It's just terrible.

So, you just swap out that laptop for a server, right? But this example shows why updating software on its own isn't always a good solution unless you update the hardware, too. And the other point he's making is that you really shouldn't be asking your hardware to do more than it's capable of. Now, in some cases, those capabilities aren't so clear. In others, you know what they were designed for and what they wouldn't be able to handle. Aaron told us about a company he worked with that had to interact with 911 emergency systems, which are based on technology called public safety answering points, technology that was last upgraded in the wake of the September 11, 2001 attacks.

A lot of these systems rely on touch-tone telephones, so you can press one to get to fire or press two to get to emergency medical dispatch. That's a big thing that a lot of big cities are putting in, because they have separate dispatching for police, fire, and emergency medical dispatch. So your first issue is, if you don't have a dial pad, how do you put a number in? So once you figure that out, either by looking up a tone generator or doing something else, you can get the proper tone entered.

So that's an extreme example of hardware incompatibility, but this kind of incompatibility happens all of the time, right? Mainframes having to talk to modern computers, physically connecting two systems that were designed decades apart. Now, while a tone generator is a straightforward workaround for that scenario, that's not always going to be the case. So you have to make it work.
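(A quick editorial aside on that tone-generator trick: DTMF, the "touch tone" standard, encodes each key as two sine waves played at once, which is why a device with no dial pad can still drive a touch-tone menu if it can play the right pair of frequencies. Here's a minimal sketch in Python; the frequency table is the standard DTMF grid, and the output file name is just an example.)

```python
# A minimal sketch of a DTMF tone generator. Each key maps to a
# (low, high) frequency pair; playing both sinusoids at once produces
# the tone a touch-tone system expects to hear.
import math
import struct
import wave

DTMF = {  # key -> (low Hz, high Hz), per the standard keypad grid
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_samples(key: str, seconds: float = 0.25, rate: int = 8000):
    """Yield 16-bit PCM samples for one DTMF key press."""
    low, high = DTMF[key]
    for n in range(int(seconds * rate)):
        t = n / rate
        # Sum the two sinusoids, scaled down to avoid clipping.
        s = 0.4 * (math.sin(2 * math.pi * low * t) + math.sin(2 * math.pi * high * t))
        yield int(s * 32767)

# Write the tone for "1" to a WAV file you could play at the handset.
with wave.open("dtmf_1.wav", "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(8000)   # telephone-grade sample rate
    w.writeframes(b"".join(struct.pack("<h", s) for s in dtmf_samples("1")))
```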
And that's a lot of what we do with old hardware. We kind of limp along and limp along, and eventually it breaks down to such a point that you can no longer limp along, and that's when it gets replaced. But you've got to be careful what you wish for, because that breakdown could mean moving to something newer, but in the process it's going to cause a lot of pain, right? Pain that our next guest is going to tell us could probably be avoided.

Speaking of old hardware, I have a server that is so old here. Okay, you're supposed to say, "How old is it?" How old is it, Angela? How old is it, Angela? It is an HP ProLiant, I want to say 745. And it's so old, like, you can't even run— There's this virtualization software that I would love to run the latest version of on it, but they don't even package the drivers— Oh no. —in this software anymore. So I'm running this super old version of this hypervisor, because why not? Until it breaks. We're going to ride it until the wheels fall off. And what's running on that machine? KVM. So I'm running a bunch of servers like Satellite and Ansible and web servers and Active Directory. It's like one of my lab systems, but it is old. It is old, old. But until it breaks— She's going to keep limping along until— Yeah. Yeah. That's right. Just like Aaron said. Yep.

We spoke to Michael Wells, who's an engineering technologist with Dell Technologies. While he doesn't work with legacy hardware every day, he needs to keep it in mind when writing about potential upgrade paths, and he understands why that's necessary.

I mean, there's a lot to legacy hardware. There's a lot of reasons why that hardware hangs around. Sometimes it's because it's a mission-critical application that nobody wants to touch; moving that to something else is too risky. Or maybe it's sitting there and nobody even knows what it does anymore, but nobody wants to be the one to pull the plug and face the response from that.

Angela, I'm kind of curious what your experience or your thoughts are on this. Why is there so much legacy hardware hanging around in the enterprise? Two reasons. One, there's software running on it that cannot be migrated to a different version of an operating system, or it's tied to said hardware. Or two, people just don't have the budget. Depending on where and what vertical you work in, it may be hard to get new hardware procured, and you just have to run things. If it's not in the budget, you have to make it work. So it's usually when things start going down that people start to pay attention to how important it is to make sure your hardware is on a pretty decent replacement cycle. You don't know how important it is until it goes down. Yeah.

Do you have any insights as to why he said it might be risky to move something from one piece of hardware to another? I think the risk could be in the hardware that said software was built "to run on." If you haven't done any testing, or if the vendor hasn't certified different hardware, that could be a risk. It could make it non-supported. It could be that there's something inside of the CPU that needs to be enabled or running that may not be on a particular CPU. So you have to be super careful when you're pairing hardware with software that you meet the hardware compatibility list. You don't want to run afoul of the developers who wrote the software to run on certain hardware and operating systems, because that can be cause for a whole lot of issues as well.
Sounds like there are a lot of, I'll say good reasons, a lot of good reasons why there's still so much legacy hardware hanging around. Exactly. Yeah, and I totally understand. I try not to begrudge people, because I've been in those situations where it's like, you just can't. You just can't. You want to, even the people who work there. I wish we could. But you can't. But we can't. So it's painful.

Well, in the meantime, if you're stuck with your legacy hardware, hopefully you do some of that poking around we talked about last episode before pulling out any wires and just seeing what happens. And there are probably a few steps that come between being faced with an older machine and completely taking it out of commission, because as we've just heard, a lot of the time, ignoring older hardware is not a good option.

The thing that keeps coming to my mind is: when is it okay to just leave that older hardware alone? And then also, how do you know when it's time to start a migration? Well, leaving it alone is sometimes the only option. Again, we talked about budget, we talked about other constraints. But what happens when you leave it alone and something goes wrong? If hardware fails, and the hardware is so old it's no longer under warranty, you can't get a part for it. You have to go off-market to try to find a part for a server that's 14 years old. I imagine that's pretty expensive too, right? Very expensive and time-consuming. So if this is a system that needs uptime and it has some value to the organization, you're going to have to weigh, "Well, how much is that cost going to be to recover from said outage when you have to do all these things to bring it up?" So, unfortunately, some organizations don't know how important something is until it's down. And when these legacy systems break down and you realize, "We don't have a motherboard for this. We don't have—" fill in the blank for this, a power supply, whatever, then you're going to realize how important this software is. Money might come from somewhere to replace said hardware, if that's even an option. Mm-hmm.

Yeah, because parts fail. I mean, everything breaks down in the end, right? I'm not trying to be a downer here, but it's true. Nothing lasts forever. So at some point, you're going to have to plan for that move. I mean, we're talking about the death of the hardware, right? Yeah. So what makes it so difficult to move bits from one machine to another?

Well, the most common reason is architecture. Predominantly, the x64 architecture that we use today has been around for a while, but that wasn't always the prevailing architecture. I mean, there's a reason why there are billions of lines of COBOL still running in the world, and it's because these applications work. They're on different architectures. A lot of times they're on older mainframes; they're not traditional x86 or x64 architectures. So the biggest hurdle is trying to overcome that. Can you migrate that application to a newer architecture without having to completely rebuild it, and how much work is going to be involved in that process? Sometimes the best thing to do is just to leave it where it is.

I don't think I quite understand that, Johan. So Angela, correct me if I'm wrong here, but in short, architecture refers to the instruction set that programs can send to the hardware to complete tasks? That is correct. Okay.
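(To make "architecture" a little more concrete: a program can ask the system which instruction set it reports. A tiny sketch in Python; the exact strings returned vary by platform.)

```python
# A tiny illustration of "architecture": ask the running system which
# instruction set it reports. A binary built for one of these generally
# won't run natively on another without emulation or a rebuild.
import platform

print(platform.machine())    # e.g. "x86_64" on x64, "aarch64" on Arm, "s390x" on an IBM mainframe
print(platform.processor())  # free-form CPU description; may be empty on some systems
```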
And so when software is custom written for that hardware, which is often the case for a lot of older machines, that can make the rewrite a really lengthy, complex, and therefore expensive proposition. Okay, that makes sense. You've got to throw the baby out with the bathwater and start fresh. How do you do that, right? This is one of those things where, if this is software tied to particular hardware, it's not just that simple, where it's, "Oh, just buy a new one." These architectures don't stay around forever for a reason, because we've gotten better with components and processing, and some things just run because they were written to run on said hardware.

Let's do a hypothetical here. All right. You have old hardware. It's failing. The parts are no longer being built, so you can't order any more, and rewriting the software is going to take too long or cost too much to be cost-effective. Sometimes there's an option in between with virtualization or emulation, which are different ways to either abstract away or mimic the older hardware, so the program thinks it's still running on the old system when it's actually on newer infrastructure. It used to be pretty hit or miss, but it's come a long way.

It's not anywhere near the extent it used to be. I mean, the old turbo button on the PC existed because certain applications running on a newer processor would move too quickly, because they depended on that CPU clock cycle for timing, and the only way to make them playable again was to give you a button where you could turn off the turbo power so that it became— We don't have those types of issues anymore. Operating systems have become a lot more genericized through that concept of a hardware abstraction layer, trying to make the operating system a consistent environment regardless of the hardware that's underneath.

Okay, so option one, virtualization or emulation? Yeah? Is there an option two? Option two, migrate the hardware. Option three, pray? There you go. You have options is what we're saying. We have options. Right. You have to figure out which one is the best for you, and we can't tell you what that is. You're going to need to figure that out for yourself. Now, Michael very nicely explained the best way to do that.

The key is for the system administrators to know exactly what it is they're supporting, to understand what state it's in, and to make sure that they're raising any potential risks associated with continuing to run that system. This vendor doesn't exist anymore? That's a risk. So quantify that risk. This software is currently used in this business process; this business process impacts all of these things. Right now, we are at high exposure for this. We need to look at alternatives before that breaks down, so that we can prevent all of these other things from happening.

Risk assessment. Risk assessment. Indeed. Map it out. That was the job. You had to know what was in your datacenter. You needed to be able to keep track of all the hardware, all the asset tags, how long they were still under warranty, who they were warrantied under. You needed to know the service level agreement, the SLA, for the vendor: could you get parts same day, in four hours, the next day? You needed to know that. You needed to know that about the software vendors as well: what versions of operating systems it would be supported on. If it's on this currently old version, has it been ported and approved for a newer version? And how do you contact that vendor, and where's their documentation?
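(Back to Michael's option one for a moment: in practice, the emulation route often looks something like the sketch below, which boots a legacy 32-bit guest under QEMU full-system emulation. The disk image name is hypothetical, and the flags shown are just one reasonable configuration, not a recipe from the episode.)

```python
# A minimal sketch of the emulation option: boot the legacy OS inside
# QEMU full-system emulation so the software still "sees" era-appropriate
# hardware. Assumes qemu-system-i386 is installed; the disk image name
# is hypothetical (e.g. an image captured from the old laser-cutter PC).
import subprocess

subprocess.run([
    "qemu-system-i386",            # emulate a 32-bit x86 PC
    "-m", "512",                   # 512 MB of RAM, era-appropriate
    "-hda", "xp-lasercutter.img",  # disk image of the legacy system
    "-nic", "none",                # no network: keep the guest air-gapped
])
```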
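(And as for all that tracking: here's a rough sketch of the kind of per-system record Michael describes keeping, whether in spreadsheets or a CMDB. It captures the asset, warranty and SLA status, the business processes it supports, and a written-down risk and remediation plan. Field names and the sample data are illustrative only, not from any particular CMDB product.)

```python
# A rough sketch of a per-system risk record: what it is, what depends
# on it, warranty and SLA status, known risks, and a costed remediation
# plan written before the crisis hits. Sample data is hypothetical,
# loosely echoing the facilities server from earlier in the episode.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AssetRecord:
    asset_tag: str
    description: str
    os_version: str
    warranty_expires: date
    vendor_sla: str                # e.g. "4-hour parts", "next business day", "none"
    business_processes: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    remediation_plan: str = ""     # costs and time estimates, written in advance

    def out_of_warranty(self, today: date) -> bool:
        return today > self.warranty_expires

server = AssetRecord(
    asset_tag="DC1-0042",
    description="Facilities server, sitting on the server room floor",
    os_version="Windows XP SP3",
    warranty_expires=date(2009, 6, 30),
    vendor_sla="none",
    business_processes=["building facilities monitoring"],
    risks=["no OS updates", "no spare parts", "vendor support ended"],
    remediation_plan="Move to a VM via emulation; est. 3 weeks of effort",
)

if server.out_of_warranty(date.today()):
    print(f"{server.asset_tag} is out of warranty; risks: {', '.join(server.risks)}")
```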
Like as a systems administrator, it was your job to know everything, because it was your responsibility. That's a lot to keep track of. It is. It is a lot of movement. It was a lot of spreadsheets back then, or a really good CMDB (configuration management database). Otherwise, how do you know?

Well, Michael also emphasized that once you've got all that information and once you know all the risks, you should write up some of the solutions, too, with costs and time estimates, so that you can have a plan in place before you get to that crisis scenario where the hardware breaks down and you're scrambling to try and get it back up. That can be pretty daunting. And working with older hardware can be a real chore. I know I can get upset at devices that take forever to load something they could allegedly handle not too long ago. Now, we're going to get back to Aaron, because he's got a piece of advice for us to keep in mind during those times of distress.

The idea that this ancient cell phone that you're holding is—it's essentially screaming at the air, and then hitting a microphone. That scream is being repeated and going into someone else's device that they're holding in their hand. At its most basic, you're yelling across the world and you can hear it. That is a marvel of technology, and as horrible as the user interface might be, that's still a marvel of technology. And at least that helps.

Angela, you seem pretty skeptical there. Yeah, I didn't get the analogy. Make it make sense. You're talking about an old phone and then you're— Yeah, he's saying if you've got an old phone, maybe an old brick or an older smartphone that isn't running things as well as you expect it to, just kind of take a step back. If you're able to even make a phone call on it still, it's still performing a function that is valuable. It's still magical. It's still pretty magical in the end, right? Okay, I got it. Thank you. Thank you for that. No, you're welcome. I wasn't getting the analogy at all, but yeah, you made it make sense for me. Thank you. Oh, I'm glad I was able to do that for you.

So, let's bring it all in. Old hardware can cause problems, even if it's still technically working properly. It might not be getting replaced anytime soon, even though it's beyond its intended end of life. Let's learn how to work with it, work around its limitations, and make a plan for when a device actually stops working. Now, next time on Compiler Legacies, Kim Huang investigates the curious case of Hadoop.

Well, this was the hardware episode. We're talking about legacy, and I know all of our listeners have some experience with legacy hardware. We need you to share your thoughts with us. What type of legacy hardware are you running in your office or in your home? Tweet us at Red Hat using the hashtag #CompilerPodcast. Even post some photos. I would love to see some pictures of some really old workstations or servers or some sort of equipment somewhere. I would love to see some of the legacy hardware that some of you are still running.

And that does it for the hardware episode of Compiler Legacies. Today's episode was produced by Johan Philippine, Kim Huang, and Caroline Creaghead. A big thank you to our guests, Aaron Lee and Michael Wells. Victoria Lawton pushes all our turbo buttons. Our audio engineer is Robyn Edgar. Special thanks to Shawn Cole. Our theme song was composed by Mary Ancheta.
Our audio team includes Leigh Day, Stephanie Wonderlick, Mike Esser, Nick Burns, Aaron Williamson, Karen King, Jared Oates, Rachel Ertel, Devin Pope, Matias Faundez, Mike Compton, Ocean Matthews, Paige Johnson, and Alex Traboulsi. If you liked today's episode, please follow the show and rate the show. Leave us a review and share it with someone you know. It really helps us out. All right, we'll see you next time. Take care, everybody. Hardy, har, har.

About the podcast

Compiler

Do you want to stay on top of tech, but find you’re short on time? Compiler presents perspectives, topics, and insights from the industry—free from jargon and judgment. We want to discover where technology is headed beyond the headlines, and create a place for new IT professionals to learn, grow, and thrive. If you are enjoying the show, let us know, and use #CompilerPodcast to share our episodes.