
Justin Garrison: Hello, and welcome to another episode of Ship It. I am your host, Justin Garrison, and with me as always is Autumn Nash. How's it going, Autumn?

Autumn Nash: So excited to talk to Mandi. She is amazing.

Justin Garrison: Yeah, Mandi is a wonderful guest on the show today, and this is what I'm calling our retro series; whenever we're talking about something that's 15 years or older, it's going to be amazing. And today we get to talk to Mandi about AOL chat rooms.

Autumn Nash: First of all, that was my first exposure to how much I love computers and the internet, and finding your people. AOL was it for me; that was something I used so much of as a kid, that got me hooked on computers and the community that you can get through your nerdy habits... So I'm really -- not just that, but her personality is amazing. It is hard to find someone, for one, that's a woman that worked in infrastructure, and resiliency, and all that good stuff. And then the fact that she's just hilarious. Like, her personality is fire.

Justin Garrison: So we will get to that. And actually, the more I think about it too, AOL chat rooms are kind of the original social networks. It was the original place...

Autumn Nash: It is...! That is the original place to find nerdy community.

Justin Garrison: There were plenty of other BBS'es and things that people were finding, but that was the --

Autumn Nash: The first mainstream-ish...

Justin Garrison: Right. The barrier to entry in the '90s was a lot higher. And so once that endless September sort of thing came around with IRC, and then people saw "I can just jump online, do this dial-up thing..."

Autumn Nash: I think it made it accessible, and I think that people didn't know those communities existed, and then all of a sudden -- this is before Reddit, this is before Tumblr, this is before all the other things became a thing and got popular. But this is the first exposure to, for one, meeting your besties online and not knowing them before.

Justin Garrison: Right. You didn't know who the person was at all. No pictures.

Autumn Nash: Yeah. And then meeting people based off of interest, right? So just purely off of interest, people that you've met from all over the world. This was the first exposure to that, really.

Justin Garrison: Yeah. And I think in a lot of ways the online social networks are reverting into these smaller, close-knit communities...

Autumn Nash: It is. We went from the globalness of Twitter, being able to find people from all over just based off of what they're saying... Back kind of into the little corners of community, because - I don't know, the erosion of Twitter...

Justin Garrison: Well, I mean, it's Twitter, but also just realizing that your network - you can't keep all those connections. As much as you may have thousands of people on some social network, it's like "Actually, I just want to go have a DM with someone for a while."

Autumn Nash: I don't know, I think it's definitely easier to post pictures of your kid on Facebook, so everybody can see them, and then you don't have to go and DM each person, and...

Justin Garrison: As a distribution method, absolutely.

Autumn Nash: But not just that. But keeping up every now and then -- I think it's really easy as an adult to forget this person exists, or you get so wrapped up in getting your kids around, and your job... It's really hard to always kind of -- social media means you can comment, and be like "Oh my God, your kid got so big", and "Oh, congratulations on that promotion", and you get those little bits of keeping each other in each other's lives, until you get more time... And I think that we are risking losing that. There are good parts, but at this point, Facebook is so toxic. Twitter is getting -- have you seen the amount of women who have been harassed on Twitter in tech in the last week? The sad, angry dudes of tech are out in force. This whole "Blame everything on DEI" has gotten out of hand. It is ridiculous. And Elon is just lighting the fire.

Justin Garrison: I have been thinking about joining a BBS. I've found out there's BBS'es that still exist.

Autumn Nash: What is a BBS exactly?

Justin Garrison: It's a bulletin board system. It was like the OG Reddit sort of forum. It's just a forum basically, but old school. And I read this book called "Broad Band", which is two words; broad as in woman, and then band like a group of women. And it was about the women who created --

Autumn Nash: I love play on words like that.

Justin Garrison: It was an amazing title and a great book about women who created early technology, and the internet, and these foundations. But I found out there was someone who runs a BBS in New York. She ran one of the very first BBS'es, and it's still available today. If you want to sign up, it's a monthly subscription, you pay, and they send you a packet in the mail with your sign on.

Autumn Nash: This is why I love that you're my tech friend, because you're so not like average tech bro.

Justin Garrison: I'm pretty mid according to my kids, so...

Autumn Nash: [07:46] You're an awesome husband that makes cute stuff for your wife, and you're always reading and informing yourself somewhere... Some people just like to virtue-signal, and we're like "No, please don't. Just stop." But I think this segues really well into my article about why women in tech spaces are shutting down. Women Who Code is shutting down, one of Portland's women's groups is shutting down... She Geeks is shutting down... And Women Who Code - almost every female tech account on Twitter had something to say about how much Women Who Code has had an impact on their career, and getting them in. It is one of the biggest tech organizations. For one, I'm always constantly sharing their tech jokes on Instagram. They're fire. They help so many women; they've done so many classes, they do so many summer camps for girls... They have so many speaking opportunities, so many scholarships... This is one of the biggest, and it is shutting down because of lack of funding. Because the first thing that happened when tech went from zero interest rates and making money hand over fist -- well, they're still making money hand over fist. But when we got into the supposed tech recession, the first thing they cut was DEI, recruiters, and any initiative to get people into tech who are not your average people getting into tech. Those were the first teams to go. And then the funding from all the tech companies that was going into these initiatives - any kind of... what was the word? I guess donations... Those got cut, too. So it's really sad to see such an impactful organization close down.

Justin Garrison: I saw that and I was like "What happened?" From the outside, not being a part of it, it seemed like it was successful and it was growing and things were happening... And then realizing how much it does depend on corporate sponsorship. And in-person events are so important, where during COVID it was like "Oh, great." I naively would think "Oh, cool, virtual events are more accessible for people, and it's easier for anyone to join, at any time, from anywhere, without travel."

Autumn Nash: I do think they are though. I think that they're extremely important.

Justin Garrison: Yeah, there is a lot of importance there. But also, there's a lot of importance on that in-person connection. And when I think of every job that I've had in my past, it came from meeting someone in-person somewhere, at a meetup, or a conference, or...

Autumn Nash: There is a good medium, right? So I think smaller meetups are really good online, because people don't always show up to the ones that are smaller... If you have a monthly meetup, I think you can do them in person or online... But I think, for one, with women - we're typically more the caregivers. A lot of men will have wives that don't work, so they can kind of pick up the extra... But very few women have husbands that don't work, that do the caregiving. So even if you both are switching off, it's still a very different situation.

So I think any type of virtual event is better for moms, it's better for disabled people, it's better for military spouses, it's better for accessibility for people... But I also think that in-person community -- one thing I really noticed is React Miami; I've never seen so many women in the pictures. I've never gone, but a lot of the frontend conferences seem to also have more women there... Which is sad, because at the conferences that would be more for my work, you look around and it's the same people. But you know what I mean? It is a very distinct difference. Scale was the most women I've seen in a bathroom at a tech conference besides Grace Hopper... And it wasn't even that --

Justin Garrison: Being at Scale and being part of the committee, sponsorship was down. And that was difficult. That made it hard to run a conference at that scale, at that size, to be able to [unintelligible 00:11:31.19]

Autumn Nash: And Scale is not even that big. You know what I mean?

Justin Garrison: No, it's a pretty small-ish community-run conference, but it is still primarily in-person. There are live streams and we make some of that available. We don't have the bandwidth or the ability to do the online networking with folks. Like "Oh, come hang out in this room." And from my experience, a lot of that just never happened anyway. A lot of folks didn't stick around. They're like "Actually, I have something else to do." And if you're at your computer, you're going to be distracted anyway... And so it's like, what's kind of the point there? But also from a speaker's perspective, I have done virtual events where literally no one showed up.

Autumn Nash: [12:05] I've actually had really good turnouts for virtual events.

Justin Garrison: That's great. I remember I worked on one for weeks, and it was just me and the person who introduced me - that was the only person on the stream with me. I felt so broken...

Autumn Nash: I think we have to remember though that those recordings last forever. For instance, with Military Spouse Coders, because we're all in different time zones, a lot of times our attendance will look like not a lot of people are showing up, but we get a ton of views after, because people are in different time zones... And sometimes if you're chasing a kid, or if you have a doctor's appointment, you can't watch it then... But they'll go back and watch it at night. We get a ton of views when everybody is settling down and they have a second to sit and kind of go over the material. So I think it also allows for consumption when people have the time. Even when I go to an in-person conference, sometimes I'll go back and relisten to a talk that was really good, so I can take better notes...

Justin Garrison: I am one of the few people that will watch conference talks on YouTube, but I also know that what counts as a ton of views is not much. Most of the conferences will have a dozen or so views. It's not hundreds, it's not thousands. The really large corporate-sponsored events - those will get some views, because they put ad money behind it... But for a lot of this stuff it's like "Oh, if I have the choice of a slimmed-down YouTube-ified version of that talk that's 10 minutes long, or the one with slides where someone's talking to me for 40 minutes - I'm gonna pick the YouTube one, 10 minutes, every time." And that's where if you're on those platforms, and you're in those ecosystems, it just makes it easier to kind of consume the snackable content. I'll scroll through a bunch of shorts before I even click on that one that was 10 minutes long. All of those things are trade-offs.

Autumn Nash: I think it depends, because some people never -- well, I won't say never, but they're less likely to get that experience. If you don't have the money to be able to go to those conferences, or to be able to kind of take off work... A lot of students may not come from the background where they can go to those conferences, or take time off work. They could be caregiving for their families, they could be working a job just to be in college... So I think definitely at a certain point yes, you get there... But some people don't even have that opportunity, and that's the only way they have access to it, so I just think it depends on perspective.

Justin Garrison: And I absolutely think that the information should be available, and we should make that freely available. And I'm even looking at pulling a lot of my own personal content off of platforms like TikTok and YouTube and making it available on my site without ads. And "Hey, this could just be on my website." You don't need to even spend the time commitment of "I have to watch an ad to do this thing." If you want to get it for free without advertising, I'm going to try to make that stuff available for people. And that's super-important too, but obviously, discovery at that point is hard, and people don't know where to look.

Autumn Nash: I really think that is one of the best things about tech at one point - the fact that it's so accessible, that you can get free content online and work through it in your own time... And that gives people the opportunity to be able to get into tech, back when we weren't gatekeeping as heavily as we kind of are now.

Justin Garrison: My article is "Executing crons at scale." It's all about cron jobs at Slack, where Slack has these abilities to run reminders, and jobs, and things that run in the background... And I loved how this article started off with just like "We had a cron server." Like most every large company in the world, there is a cron server that sits there somewhere that has a crontab, and people modify the crontab to run their jobs. And at some level, that becomes not good enough, because the server doesn't scale, too many conflicts, errors are difficult... All this stuff just gets in the way. And I remember my time at Disney Animation - we had two cron servers. That was how we scaled it up. We're like "Oh, well, this cron server was for this group of people, and this cron server was for that group of people", and we just scaled it up that way. You SSH in, you modify your crontab, maybe you check it into some sort of config management... But in general, it was just a server that was always running, to run jobs whenever.

And in this case, they decided at Slack that they needed something better, and something a little more scalable... And so of course, Kubernetes was the answer. And not just Kubernetes with a custom job scheduler - this has a full-on Kafka queue; Vitess, which is like a distributed MySQL database; and a custom scheduler... As well as their custom platform on top of Kubernetes.

[16:19] So I think it's really interesting... Once you step beyond the machine, what components do you need to make this scalable or usable for a large company? And especially, I like these ad hoc jobs, because I can set up a Slack reminder for any time, so they can't predict these things, and it will send me a message. And that literally gets queued on their job scheduling system, and then comes back to me as a message. And so I've found just the practicality of how that works to be really interesting.
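The pattern Justin is describing boils down to three pieces: scheduled jobs in a durable store (Vitess at Slack), a scheduler that polls for due jobs, and a queue (Kafka there) that workers drain. A minimal, hypothetical sketch of that shape - invented names and schema, with sqlite and an in-memory deque standing in for Vitess and Kafka:

```python
import sqlite3, time
from collections import deque

db = sqlite3.connect(":memory:")  # stand-in for the Vitess job table
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, run_at REAL, payload TEXT)")
db.execute("INSERT INTO jobs (run_at, payload) VALUES (?, ?)",
           (time.time() + 1, "remind @justin: stand up"))  # an ad hoc reminder
db.commit()

queue = deque()  # stand-in for the Kafka topic

def scheduler_tick():
    """Move every job whose run_at has passed from the table onto the queue."""
    due = db.execute("SELECT id, payload FROM jobs WHERE run_at <= ?",
                     (time.time(),)).fetchall()
    for job_id, payload in due:
        queue.append(payload)
        db.execute("DELETE FROM jobs WHERE id = ?", (job_id,))
    db.commit()

def worker_drain():
    while queue:
        print("executing:", queue.popleft())

time.sleep(1.1)   # wait until the reminder is due
scheduler_tick()  # the scheduler finds it and enqueues it
worker_drain()    # a worker picks it up and delivers the message
```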

Autumn Nash: I find this interesting, for one, because I think Slack seems to be a lot of people's favorite alternative for keeping in touch with people. I think Slack seems to me like it's more loved than Teams and other alternatives. I love Slack, personally. And I think it's cool that they've rolled out so many different services and ways that you can use Slack as more than just that way to keep in touch with work and with friends and yadda-yadda. But I also think it's really interesting that it's built in Golang, because - did you see that tweet that everybody was arguing about, like whether people should go to Rust, and JavaScript, and I think another language, because they were like "All the other languages are not going to be used, and everything's going to be built in TypeScript, Rust and one other thing"? And I was like "What?! Do you know all the infrastructure that's built in Java and C++, and a million other things?" I actually think Rust is going to be very impactful, because so many things are going to be rewritten in it, like the kernel... But how would you just completely forget that Go and other things existed? And there's so much legacy software that is built in Java and C++ and C... I was just like "How?!" PHP is gonna live on after we're all dead and buried. So it's cool.

Justin Garrison: There's plenty of COBOL and Perl running out there, so...

Autumn Nash: And the people that write COBOL are getting paid right now... Because they're the only ones left.

Justin Garrison: All [unintelligible 00:18:15.06]

Autumn Nash: Interesting [unintelligible 00:18:19.21] That's pretty cool.

Justin Garrison: So Bedrock is there... It's actually their abstraction of Kubernetes. I get the name confused with -- there's other tools [unintelligible 00:18:27.07]

Autumn Nash: Ooh, okay, okay.

Justin Garrison: They have a platform built on top of Kubernetes, which makes sense with Go, because all the Kubernetes tooling is Go anyway. So you have your Kubebuilder, and your scheduler stuff is all Go-based... And so it's just like "Well, let's just pick that." And so yeah, they have a platform on top of Kubernetes called Bedrock, and then they built this messaging, queuing system with a scheduler.

Autumn Nash: Oh, I've never heard of [unintelligible 00:18:47.19] I'll have to check that out. Have you ever used that?

Justin Garrison: Nope.

Autumn Nash: Well, it's used to manage [unintelligible 00:18:53.03] Okay, cool.

Justin Garrison: Right. Usually, you touch a file and say "If the file exists, then you don't run." And so yeah, it's a way to do that better.
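The touch-file guard Justin mentions, as a minimal sketch (the lock path and job body are hypothetical; utilities like flock(1) do the same thing more robustly):

```python
import os, sys

LOCK = "/tmp/nightly-job.lock"  # hypothetical lock path

# If a previous run's lock file still exists, that run is presumably still
# going, so this run exits instead of doubling up.
if os.path.exists(LOCK):
    sys.exit(0)

open(LOCK, "w").close()  # "touch" the lock file
try:
    print("running the job...")  # the actual cron job body goes here
finally:
    os.remove(LOCK)  # clear the lock so the next scheduled run can proceed
```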

Autumn Nash: It's pretty cool. I'm always looking for new Linux tools to try out.

Justin Garrison: So let's go ahead and jump into the interview with Mandi and talk all about AOL chat rooms.

Break: [19:11]

Justin Garrison: Welcome today, Mandi Walls. She is a DevOps advocate at PagerDuty. But Mandi, first, I want to start off with a/s/l.

Mandi Walls: Yeah, right?!

Justin Garrison: If anyone knows what that acronym means, it'll bring you back to a time that we want to talk to Mandi about... What were you doing -- how long ago was this? 20 years ago or so? What was the infrastructure you were responsible for?

Mandi Walls: Almost exactly 20 years ago. I started at AOL in the summer of 2004, and I was there until 2011...

Justin Garrison: Wow.

Mandi Walls: And in the space of that time I ran AOL's channels. So News, Sports, Entertainment, Games, games.com, Moviefone; I ran aol.com for a while... And these things migrated across multiple platforms. So yes, [unintelligible 00:20:04.09] It's now being torn down... But yeah.

Autumn Nash: That was amazing.

Mandi Walls: All gone. All gone.

Justin Garrison: How did you get started there? What brought you to AOL?

Mandi Walls: I was working at the National Institutes of Health. So we were down in the DC area, and I was at NIH, working for NHGRI, which is the Human Genome Research Institute at NIH. And we were doing a combination of Solaris and Linux stuff... And I'm a Linux person, and Solaris is a -- it is what it is.

Justin Garrison: It was a silence there, that's what it was.

Mandi Walls: Well, I was like "Is this a sweary podcast...?" And then AOL advertised "Hey, we're looking for Linux administrators", because it turned out they were moving off of commercial Unix, onto Linux... And I was like "Oh, that sounds more interesting than running a bunch of scientific software." Which - human genome is super-interesting, but it's its own pocket. It doesn't move quite that fast; it wasn't at that time. Yeah, and [unintelligible 00:21:09.12] joined at AOL. It was also much closer to home, because I was living in Reston, and NIH is in Bethesda. So that meant taking the horrible DC Beltway to work every day.

Autumn Nash: That sounds not fun. Not even a little bit.

Justin Garrison: So bringing this back 20 years, what did the tech stack look like for an AOL chat room?

Mandi Walls: Yeah, so AOL had a mixed bag... So it's interesting, because at the time no one really talked about what they were running. Everything was very secretive at that time. And even to the point where if you wanted to know what was going on at AOL, you probably had to read Kara Swisher's columns, to find out what was going on... Because things were just kind of secretive there. But the platforms behind all of AOL's products were all significantly different, which was super-weird, and really just bizarre. There was no consolidation at that time. Everything had kind of gone its own direction for the things that it needed... And they had bought other stuff; like, Prodigy was in there, and it ran on 36-bit machines, and you're like "Where did these even come from? Why is this still here?"

Justin Garrison: [laughs] 36-bit?

Mandi Walls: Right...? It was super-weird... And there was just other weird stuff in the mail system... And chat has its own infrastructure, and then we worked on the website, so I was in web operations... And our stuff had been a combination of Solaris and IRIX.

Justin Garrison: Okay...

Mandi Walls: Because if you've got money to burn, you might as well buy IRIX. And they were moving everything onto commodity hardware, onto what at the time was RHEL. So early versions of RHEL - 2.1, or whatever that first release was at the time. And that's what we were hired on for. There were a couple of us that were all brought in around the same time, 2004, to help with the Linux side of the house, and we just stuck around. But yeah, it was a big mix; a lot of different stuff behind the scenes over there. Because everything was built at different times, and as they added new features, they just built whatever worked best for that platform, and off it went.

Justin Garrison: Whatever they knew, right? ...from some other experience of like "Oh, I played with this last week, so I'm going to deploy it to production now."

Mandi Walls: Yeah. And off it went.

Autumn Nash: I feel like that still happens a lot, though. Companies, especially big enterprises, don't let their teams talk to each other, and then they just end up building -- there's six different databases...

Mandi Walls: Yes.

Autumn Nash: You're like "But why? You could share knowledge about the one", or especially if it works for --

Mandi Walls: "Or you could have your little empire over there, and I can have my little empire over here, and we can battle it out."

Justin Garrison: Never will they war. This is not --

Mandi Walls: Yeah, right?

Justin Garrison: But I mean, you've just described microservices to some extent. It's just like "Oh, this is just like madness over here, and now we consolidated, and now we went back to madness."

Autumn Nash: It's just wild they're not allowed to talk about it... I'm like "You could have got advice, or something." I don't know.

Mandi Walls: Not the way things ran at the time. Super-crazy.

Justin Garrison: [24:09] So you had this mishmash of infrastructure and tooling, and you're moving it onto RHEL... And what were you responsible for in that migration? Were you doing provisioning, and Linux servers? I'm assuming this is hardware stacks, and you have data centers, places, and...

Mandi Walls: At the time we were buying a combination of -- well, they'd put everything up for bid, because you're gonna buy half a million dollars of hardware at a time... So you'd get like a six-month bid-out on whatever they're going to put in the data center. So sometimes we'd get Dell machines, sometimes HP would win the bid. So you'd be flipping back and forth; we'd have a mix of hardware, and mix of ages, things would go back on lease return... A lot of the gear at that time - they're in owned data centers, but the gear is leased, so they'd go back. And so you're just constantly refreshing the farms, and all the fleets were constantly in motion for things coming in and out... And if you needed to scale anything up - that's a requisition; it's not a "Slide your credit card in the cloud and get more gear", it was "Oh, it's a ticket, and four teams are involved, and there's all this budgeting..." And if you happened to get extra hardware, you'd hide it in a different project for a while...

Justin Garrison: You don't tell anybody... [laughs]

Mandi Walls: Right? So you didn't have to return it... And it would just kind of sit there; nobody's gonna notice... "There's four or five machines over there, just in case we need one." So there's a lot of begging, borrowing and stealing of systems around the system, because there's just -- we could not get capacity onto the floor fast enough for the way things were being built out. It was just absolutely nuts.

Justin Garrison: So this is 2004. CDs started disappearing from supermarkets in the late '90s or so... So this wasn't dial-up days. This is like AOL, post--

Mandi Walls: Well, dial-up was still printing money at that time... But yeah.

Justin Garrison: But you're moving into -- like, you have these services... And what kind of capacity are we talking about? Do we have hundreds of machines, do we have thousands of machines? Do we have dozens of data centers? What sort of scale?

Mandi Walls: All of that!

Justin Garrison: Okay. AOL was big. It was THE thing.

Mandi Walls: Yeah.

Justin Garrison: And now I read white papers that are like "Oh, we have 5 million hosts over here." I'm like "What?! That's a different number."

Mandi Walls: There probably weren't 5 million hosts on the internet in 2004, right?

Justin Garrison: Yeah, exactly.

Mandi Walls: The capacity constraint was so different. But yeah, 2004 was sort of the beginning of Web 2.0, so the beginning of what we call the portal era. So Yahoo, and AOL, and that stuff... And Google was just kind of rising at that point. So part of the insanity was we had our own web server. So the AOL server was written in C...

Justin Garrison: Oh, like you had your own-own.

Mandi Walls: Yeah, exactly.

Justin Garrison: It's like "There's no Apache here."

Mandi Walls: No, not at all. Not until 2008, I think, is when we started going to Apache. So yeah, it was AOL server, which is a C core with TCL as the user language. So TCL, Tool Command Language... If you're not as gray as me, you probably have never seen it. Its other claim to fame is that it ran TiVo. The TiVos were programmed in TCL. So it was AOL server and TiVo, done in TCL. And yeah, so we're porting all that stuff over from the Solaris boxes onto Linux boxes, and spreading it out, because the capacity at the time was kind of stranded. So AOL had regional data centers, and they were large, and they were owned. That was the big deal at the time - you owned your data center. And we were getting into -- as things were growing in capacity.

So at the time, aol.com was like the sixth-largest site on the internet. It was big, so we were spreading things out, trying to collocate things closer to users... And this is at the same time the rise of global DNS sharing... So Akamai was the commercial provider of the time for that stuff, where you'd go to www.aol.com and it would point you to the closest place. And that was Akamai handling all of that stuff.

[28:08] So we had hundreds of servers in a dozen locations to serve the US... And there were these little pods that ran with all the stuff you needed on the backend as well... Because when you log in as an AOL user, it knows all this stuff about you. So it knows who you are, and what you like, and what you wanna see on the homepage.

Justin Garrison: And if you have mail.

Mandi Walls: And how much mail you have. All this stuff. So we had to bring all that stuff with us when we'd load up these localized data centers. So there's -- yeah, there was a whole lot of stuff all over the place at that time. And all these -- the owned data centers, the big ones, and then these colos. So it was just crazy. There was just a lot of stuff everywhere.

Autumn Nash: Mandi, I love you. I don't know you, but we're gonna be besties. Do you know how hard it is to get people who work in infrastructure with a personality? Like... [laughter]

Mandi Walls: Oh, we could talk, yeah...

Autumn Nash: Can we just talk -- you said pour one out for the... Like, we are right here. I just -- I love you, and where have you been my whole life? We're gonna be besties. Obviously, cloud and on-prem have their place, right? But because you were in the trenches with on-prem and with building infrastructure 20 years ago, is there ever a time when people get -- you know how sometimes we think back at the past and we're like "Oh, it was great", but it wasn't great? You just made me feel so grateful for the fact that I started tech in the cloud... Because like "Yo..." That's a lot. So is there anything when people say stuff about running infrastructure on-prem and they make it sound easy - do you ever side-eye them and you're like...? [laughs]

Mandi Walls: Oh, absolutely. Like, if you haven't been running a crash cart down the cold aisle, trying to plug in and fix something, you haven't lived. But also, I feel bad for you. It wasn't fun.

Autumn Nash: I love you so much...!

Mandi Walls: It wasn't fun, man. It wasn't fun.

Justin Garrison: It was a great experience, but "fun" was not a word for it.

Mandi Walls: Not at all. We learned a lot of lessons. That was the learning period. We know what not to do. There's a reason people love the cloud, is because this other stuff is mayhem. And it's just crazy.

Autumn Nash: That's what I'm saying. They both have their place, and there is a point where on-prem just makes more sense. That's just how it is. But sometimes I feel like we romanticize things a little bit, when we get too far, and --

Mandi Walls: Yeah, infrastructure people like the control of being on-prem, and being able to artisanally curate their switch ports, and all this stuff...

Autumn Nash: It's like when people start making coffee, and they do pour over, and they're like "Because I need it to take 35 minutes", and you're like "Bro, you could have just made an espresso."

Mandi Walls: Life is too short for this.

Autumn Nash: We're a culture that sometimes likes control so much it turns into misery... When people are like "I want to control my own servers for social media", and I'm like "Dude, I have to do that at work. I don't want to --"

Mandi Walls: [unintelligible 00:30:56.18]

Autumn Nash: I think we get to a point where people really romanticize too many options, and then I'm like "You know what, I've got a whole life, and maybe I don't need all of those options."

Mandi Walls: Right? Totally. It's definitely like that. Yeah.

Autumn Nash: I love you. We're going to be besties. You're so funny. Okay, what is the craziest thing that happened to you when you worked at AOL? Did you ever have like a big outage, or...?

Mandi Walls: Oh, absolutely.

Autumn Nash: What was the worst? Tell me the best horror stories. Because I just want you to know that you've made 12-year-old me so freaking happy. I was sneaking on the internet at 10 o'clock at night when my parents went to bed... The sound of AOL starting up is the sound of my childhood. You made my whole little teenage nerdy finding-friends-on-the-internet life.

Mandi Walls: That's cool.

Autumn Nash: You powered my teenage years.

Mandi Walls: [31:50] The irony is I was never an AOL user before I joined AOL. I had no experience with the service at all, because it was a long-distance phone call from my parents' house to the local POP. So that was not going to happen from my parents' house. But yeah, the biggest outage we probably had, the one that still gives me nightmares - so one of the deep configuration features in AOL server is that you can actually get into it and see what each individual thread is doing. It's super-cool. It can tell you exactly what request every thread is serving. But you can also then see "Hey, all my threads are full. What is going on?" And then you have to get into the configuration and tweak how many threads there are.

So when we were doing a deploy to aol.com - and I think we were in five or six data centers at that time - and you drain one, load the software, and pull it back up, and then it rebalances on the global DNS. Well, it would load up, and then the threads would fail. You're like "What is going on? Why are the threads full?" Something in the new software is just a little bit too slow, and it turns out we only had 10 threads available on every server... Which is not enough. At the time it seemed like "Oh, 10 threads in there... It's pretty quick", but it slows down enough that it would block the entire thing.

So what you'd get is a stampede. So one datacenter would fall over, and then all the traffic would swing to another data center, and then that data center would get flooded and fall over, and all the requests would failover to another data center. So you could watch all the traffic sort of spike all over the place, until we got a fix pushed out to get more threads into all the systems.
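With made-up numbers, that stampede looks like this: one datacenter drains for the deploy, global DNS spreads the load over the survivors, and each one tips past its (now too-slow) capacity in turn:

```python
TRAFFIC = 60.0   # total load, arbitrary units (invented numbers)
CAPACITY = 11.0  # what each datacenter can absorb once the slow build shipped

healthy = 5      # six datacenters, one drained for the deploy
while healthy:
    share = TRAFFIC / healthy  # global DNS rebalances evenly
    if share <= CAPACITY:
        print(f"{healthy} up, {share:.1f} units each: stable")
        break
    print(f"{healthy} up, {share:.1f} units each: one falls over")
    healthy -= 1
else:
    print("everything down until the thread-count fix is pushed out")
```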

So it was fascinating, but also kind of a nightmare, because we were dealing with push-based SSH in a loop really to get all these configs out to all these systems, and then finding -- we weren't using any version control for any of this... Like, come on, it's 2006 or 2007. That really wasn't gonna happen. And so there's certainly no configuration management, we weren't doing any of that cool stuff... So yeah, we were just sitting there, waiting in a loop for all this to fix itself. So the whole farm would quiesce, and all the services would come back up. So that was a bit of a nightmare. It took about half an hour to get the whole thing straightened out.

Justin Garrison: And that question I had was "How did you do those deploys?", and it was basically - because there was no version control, there was no config management, no such thing as containers... So it was just like "I have a file, it works on my system... SCP it to every machine", right?

Mandi Walls: One hundred percent. And they're all bare metal. It's all bare metal at that time, too. There's no VMs, no containers... Everything's bare metal, everything's individually IP-ed, off it goes, and you have to push to each one.

Justin Garrison: So you had your CSV file of all your inventory.

Mandi Walls: Yeah, we had a machines.dat. That was its name. And it was a text file, it was space-delimited, so whitespace...

Justin Garrison: Yup. So you're [unintelligible 00:34:41.09] those fields, and you're just like "Go!"

Mandi Walls: Pulling that out, piping it right into the loop, and off it will go. Yeah, it was crazy.
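A sketch of that push-based loop, assuming a whitespace-delimited machines.dat with the hostname in the first field (the field layout, artifact name, and paths here are all invented):

```python
import subprocess

TARBALL = "release.tar.gz"  # hypothetical artifact name

with open("machines.dat") as inventory:
    for line in inventory:
        fields = line.split()  # space-delimited, like the real file
        if not fields or fields[0].startswith("#"):
            continue           # skip blank lines and comments
        host = fields[0]       # assume hostname is the first field
        # Copy the artifact out and unpack it, one machine at a time. Ctrl+C
        # halfway through and you no longer know which servers are current -
        # exactly the failure mode Justin describes later.
        subprocess.run(["scp", TARBALL, f"{host}:/tmp/"], check=True)
        subprocess.run(["ssh", host, f"tar xzf /tmp/{TARBALL} -C /opt/app"],
                       check=True)
        print(f"deployed to {host}")
```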

Autumn Nash: All the bad words I say to Git, you're making me really grateful for it.

Mandi Walls: Yes! Yes! Be thankful for Git. Be so thankful for Git.

Autumn Nash: I've said some really mean things to it, and now I feel like I need to go back and apologize.

Mandi Walls: I know. It's karma, right? It comes back to bite you. And that one experience was a big part of why I went to Chef after I left AOL, because I was like "There has to be a better way to do this."

Autumn Nash: But I feel like you were -- like, I didn't know you when you worked there or what you did, but I felt like your voice and having your voice in those rooms were probably like fire, because you were in the trenches...

Mandi Walls: I'd be on mute. I'd be on mute a whole lot. Yes. I mean, we had definitely different outages where you'd be on a headset, like a battery-powered headset or whatever, and you'd be on it so long that the headset would die... There were some dark hours.

Justin Garrison: How big is the team that's running all of these services, and web services for that?

Mandi Walls: It would vary. Four to six, eight at the max... Like, these are little teams. The engineering teams are huge. We ran the channels, which was called Big Bowl. So if you've ever been to Chicago, there is a place called Big Bowl; it's a restaurant. There's also one in Reston. And that's where they came up with the concept for this. That's what they called the product, it was a Big Bowl. We ran 200 DNS names or so, 70 channels across it...

Autumn Nash: [36:08] I feel like eight people was not enough for that.

Mandi Walls: Right? Hundreds of developers dealing with this thing...

Autumn Nash: You're giving me anxiety.

Mandi Walls: Itty-bitty teeny-tiny operations team to deal with it. But that was the thing with the monoliths; you could serve all of this stuff out of one big spaghetti mass of code, and then hope that the handful of people on the other end could figure it out when you screwed it up. So... Yeah. It was little teams; very small teams for all that stuff.

Justin Garrison: Yeah, no developers on call... What even was -- on call, you'd have like a USB [unintelligible 00:36:44.28]

Mandi Walls: I'd have a pager. Like a legitimate, actual old-school Motorola pager that we were all assigned. And the NOC would call us. So AOL had a NOC, which - you have to at that scale, really. And those folks were on all the time. They were based in Columbus at that time.

Autumn Nash: What is a NOC?

Mandi Walls: Network Operations Center.

Autumn Nash: Interesting. Okay.

Mandi Walls: So they're 24 by 7, 365, and just rotating teams, watching the blinking lights. If anything goes down, they're on the [unintelligible 00:37:16.07] They're calling up on the phone, "Something's down. Can you log in?" You're like "Well, yeah, I guess so. I was eating dinner, but whatever... Yeah..."

Justin Garrison: Did you use AOL chat rooms for coordination on your teams, or anything? Or was that too --

Mandi Walls: No, there were some weird shortcomings with the chat rooms, in that you couldn't really put them together for teams. That made it super-hard for us to use our own products to actually talk to people. So think about Slack and stuff today - it's super-easy; you can add as many people as you want to a channel. You couldn't really do that with the AIM stuff. So we'd use it for person to person, but we had our IRC channels that we ran internally to talk on teams.

Justin Garrison: Yeah, I was gonna say [unintelligible 00:37:57.10]

Autumn Nash: That's crazy. I didn't think about that. But I don't think I've ever talked to -- unless you were in an actual chat room... I don't think I ever talked to people in groups on AIM, so that's crazy. Slack definitely spoils us. I can't even go to Teams. Slack has ruined me forever. I have friends Slack channels. Like, whole Slacks just for friends.

Mandi Walls: Of course. Yes.

Justin Garrison: Every old company that I went to, I have like an old coworkers Slack.

Autumn Nash: Yes. I've got like a nonprofit Slack, and then we've got like a friends chat... Slack has ruined us all. They know they've got us.

Mandi Walls: Absolutely. So easy.

Autumn Nash: It is. Oh, man, that's crazy, just thinking about the fact that you couldn't even use AOL to do your stuff internally. It's just...

Mandi Walls: Yeah. And there was other stuff that was -- if you think about it, the product AOL Mail was very much consumer-focused. But we're a tech team at that time, and we want procmail rules to move mail around, and all that stuff, that you couldn't do with AOL Mail at that time. So even then, operations had our own mail server on a different subdomain, and that's where we kept all our mail. It was just so divided from the customer experience; probably not the best way to do that if you're really down with seeing things from the customer's perspective... But the consumer products weren't suitable for the users on the tech side.

Justin Garrison: It's interesting, you're describing this wave that we see over and over again, in any technology, where it's like "Oh, this consumer thing is great, and it's mass-adopted, but it's not flexible enough for the power users, for people that really want to dive deep into it", and so we switch back to this "You have to run that yourself." And a lot of people ran their own mail servers for a very long time, because they needed that power, they needed bigger scale, whatever... And then it consolidated, and we're like "Oh, now guess what? The consumer products get some of those features", and bring some of that power into "Oh, Gmail can just add filters for me", and I can do that routing, all that stuff.

[40:01] And then I'm wondering what the next shift is going to be; what the next gap in consumer features is, that we're like "Hey, guess what?" Maybe at some point the cloud makes things boring and easy, and then you're like "Oh, but I can't do the thing that I need to, so I have to go buy a datacenter, or buy some hardware", that sort of stuff.

Mandi Walls: Yeah. It'd be super-interesting.

Autumn Nash: I think we're already at that point. Look at how much people like running their own servers for Mastodon, and stuff.

Mandi Walls: Those people are weird...

Autumn Nash: I remember being behind a startup in line at an observability booth, and people were talking about running servers in their grandma's garage... And I'm just like "Are we back to this?" I'm like "I feel like we've already done this and bought that T-shirt. Is this cool again?"

Justin Garrison: I mean, I never stopped, so I don't know what you're all talking about... [laughs]

Mandi Walls: No, I don't run anything at home anymore. I used to have a whole bunch of stuff, but I've moved a whole bunch of times, I've moved abroad for a while, and I came back, and I'm like "I think I'll just put all this stuff in the cloud. I don't need to have it at home anymore."

Autumn Nash: See, all my stuff's in the cloud, but my kid wants to run stuff on a Raspberry Pi, and I blame Justin osmosisly...

Justin Garrison: I mean, I've been running a home theater PC of some sort and a NAS at my house since 2005.

Mandi Walls: Oh sure, yeah.

Justin Garrison: In college we had them, too. We had them probably in 2003. And ever since then, I just kind of got hooked, and I've just run them ever since. And it varies in what I'm running, and what hardware and whatnot, but there's always something that runs locally, and I have backups and storage and stuff like that... And I do own some of that. And I don't need all the power. I want the consumer version of it most of the time... I'm just like "I just want something that works." But I paid for it once. My Plex server and my Synology - it was like six, seven years ago that I paid for it upfront, and I'm just like "Yeah, it just works." And we're fine.

You described outages, you've described a little bit -- all of your updates, basically, for chat rooms were just like SSH for loops? So just like "Here's the files, here's the new thing"?

Mandi Walls: Yeah, everything came through as a tarball. And if you were lucky, it would be production-ready. And if you weren't, you had to open the tarball, fix a config for prod, reroll the tarball and push it back out.
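That "open the tarball, fix a config for prod, reroll it" step, as a minimal sketch (the archive layout, config path, and dev-to-prod substitution are all invented):

```python
import pathlib, tarfile, tempfile

SRC = "release.tar.gz"  # hypothetical tarball from the engineering team

with tempfile.TemporaryDirectory() as workdir:
    with tarfile.open(SRC) as tb:
        tb.extractall(workdir)                        # unwrap the tarball
    cfg = pathlib.Path(workdir, "app", "server.cfg")  # assumed config location
    cfg.write_text(cfg.read_text().replace("env=dev", "env=prod"))
    with tarfile.open(SRC, "w:gz") as tb:             # reroll it, ready for prod
        tb.add(pathlib.Path(workdir, "app"), arcname="app")
```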

Justin Garrison: So you are one of the few people in the world that know all of the tar flags.

Mandi Walls: Yeah. I know the old ones, and I use a hyphen, and [unintelligible 00:42:24.21] like "You don't need a hyphen anymore. [unintelligible 00:42:26.27]

Justin Garrison: [unintelligible 00:42:27.12] You don't need that hyphen in there. That's just a wasted character.

Mandi Walls: "I have so much muscle memory on this... What are you talking about?!"

Autumn Nash: You did not say a wasted character...

Justin Garrison: It is a wasted character! You do not need the hyphen [unintelligible 00:42:38.09]

Autumn Nash: I'm done with you... It's fine.

Justin Garrison: Those are the things that learning through that period of just like "I have to get this script right the first time, because it's going to be deployed, and I don't want to run this script again, because then I'm going to Ctrl+C it in the middle of my for loop, and I don't know which servers are good. So I have to do it all again." So it's like, you're gonna learn the tar commands... I learned regex early on from that, and it just has stuck with me... And it's one of the best things that I learned, because I'm just like "Guess what - this applies in a lot of situations." And now that [unintelligible 00:43:08.00] command is not scary. I'm fine. I'll get it on maybe the second try now, but it's just like "Oh, these things are pieces that I learned through doing it and struggling over and over again, on call."

Mandi Walls: And that was one of the great parts about working on a Unix platform, just at the foundational level. The individual tools are so neat, and you could plug them together so well... So yeah, we were able to read through machines.dat, pull things out with sed and awk, and send it off to the for loop super-easily... But we had scripts that would do whatever, and once we got to Java -- so we migrated from AOL server to Tomcat in 2006, I think...

Justin Garrison: War files now. You don't get a tar, you get a war.

Mandi Walls: [43:50] You get a war file, with an XML config wrapped in it, which is a nightmare... And no good practice around making sure things are good and ready for prod. So we'd be unwrapping everything, and then rolling it back up and pushing it out, just to make sure. Because one of the interesting things about AOL at that time - it was like the only place that I've really encountered that spent a lot of money on the non-prod environments. So there were full deploys across dev and integration testing... Because if you were going to integrate with the dial-up stuff, there's this service called Unified Preferences that held all the backend information about all the users... And if you were going to integrate with that and pull it in, you had to load it up in the integration environment and make sure all this stuff worked. So we had all these environments and all this stuff... And we were always getting stuff to go into prod that was configured for dev, and integration, and not for where it was supposed to go, for whatever reason...

Justin Garrison: What sort of data behind the scenes were -- is this MySQL then?

Mandi Walls: It was. It was MySQL... And one of the unfortunate things about that era was that there wasn't a lot of open source coming out of any of those companies. AOL server was open source, but there were so many other cool bits and pieces that AOL -- and Yahoo too, actually, at the time... That just never made it out into the world. So we had these MySQL servers, and they had this proxy software in front of them called Atomics... I don't remember what it stood for. But it basically allowed you to put HTTP calls into your database, so you could put it behind the [unintelligible 00:45:21.00] and do round robin across a set of databases that were all replicas of each other. And it made it super-easy to deal with the databases. It would have been so neat if that thing had made it out into the world for other people to use, but it never did. But that was the backend of those systems - MySQL servers at the time... So yeah.
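The idea behind that proxy, as Mandi describes it - queries arrive over HTTP and get spread round-robin across interchangeable read replicas - in a minimal, hypothetical sketch (sqlite files stand in for the MySQL replicas; the real Atomics never made it out the door):

```python
import itertools, json, sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

REPLICAS = ["replica0.db", "replica1.db", "replica2.db"]  # stand-in replica DSNs
next_replica = itertools.cycle(REPLICAS).__next__

class QueryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The POST body is the query; each request lands on the next replica.
        sql = self.rfile.read(int(self.headers["Content-Length"])).decode()
        con = sqlite3.connect(next_replica())  # round-robin across replicas
        rows = con.execute(sql).fetchall()     # demo only: never run raw SQL
        con.close()
        body = json.dumps(rows).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), QueryHandler).serve_forever()
```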

Autumn Nash: There probably wasn't a ton of choices for databases either, right?

Mandi Walls: No, because it was all commercial. So some of the older stuff ran on Oracle, but for the web stuff, to get the kind of scale out of it, you don't want to pay for Oracle...

Justin Garrison: Oh my gosh, I did not know that this was open source. I've just found the GitHub. GitHub/aolserver/aolserver. Last commit. Oh, there's one that was two years ago, but everything else is like 20 years. 19 years ago, 21 years ago... This is amazing.

Mandi Walls: Yeah, it's classic. So the other dirty secret of AOL server was that the guys at Bitly were AOL employees, and they took AOL server with them to Bitly. So there was some AOL server behind Bitly for a long time. I think they've migrated off of it now, but... It was over there, too.

Justin Garrison: There's your nsconfig TCL file right there.

Mandi Walls: There you go. Yeah.

Justin Garrison: This is way back. This is amazing. I love that.

Mandi Walls: Yeah. It's all there. If you want to run it, go for it, man. Yeah.

Autumn Nash: It's also crazy, because back in the '90s and early 2000s AOL and Yahoo were so big. It's hard to imagine how it is now, where Yahoo is barely existing, and AOL is gone. It's crazy.

Justin Garrison: Which is funny, because - I mean, sheer scale... Yahoo is probably still bigger now, in infrastructure and development, than it was then...

Mandi Walls: It's so huge.

Justin Garrison: There's just so many more people, and there are so many other things to do. These are still big things, they just aren't in the mindshare, and they aren't the common thing you really think about anymore.

Mandi Walls: Absolutely.

Autumn Nash: They were like the biggest email providers at the time. Yahoo and MSN... It was crazy.

Justin Garrison: And then Google came along like "One gig of free email" and everyone was like "Ah, screw that." I was deleting every old -- like 10 megs? "I don't know what to do with this."

Autumn Nash: Well, not just that, but Google has an integration to use your mail for everything. So I'm just lazy and don't want to make six different logins, and I'm just like "Sweet..."

Mandi Walls: It changed the whole landscape of that stuff.

Justin Garrison: So you left in 2011, right? And this is right around -- like, DevOps was a thing. It was becoming -- all of those lessons learned that you're talking about were definitely coming into view publicly for people, and they started talking about "Hey, how do we not throw things over the wall? How do we do this config management stuff?", all that stuff. So what was it like at the tail end of "Oh, hey, we're going this direction", or "We're going to change everything to make it better, hopefully, for the ops team or the developers"?

Mandi Walls: [48:04] Yeah, AOL wasn't headed in that direction when I left. So the Velocity Conference kind of kicked all this off. The first one of those was in 2008, and then things kind of got rolling after that, with web operations being something that was like -- you had to do it at scale, you had to think about things a little bit differently than people had been thinking about systems administration in the past... And also sort of taking into account "Yeah, you can't do this at massive scale with these tiny little teams, when you're just on the receiving end of a waterfall of garbage from the application teams." Because they're being slammed in the head for deadlines, and all this stuff...

Justin Garrison: Yeah. Features, and -- yup.

Mandi Walls: There were a number of places where everybody had just crazy expectations that no one was going to meet. So at the time that I left, AOL wasn't really headed in that direction yet. It was a very tumultuous time at AOL. They were ingesting the Huffington Post at that time, so that was a whole other platform they were dealing with...

Autumn Nash: I didn't even know they bought that. That's crazy.

Mandi Walls: Yeah, we were at 770 Broadway when HuffPo got bought, and they all came in and took two conference rooms to be Arianna's office, and she had these nice couches... It was very nice, and we were like "What's going on?" "Oh, that's Arianna's office." "Okay... Can't go in there anymore."

But I think they figured it out eventually. There's still folks over there that are running all these things. Like you said, stuff is still there. Yahoo and AOL are now owned by -- I think it's one entity. It was under Verizon for a while, and then I think they've been spun out or whatever... But they're all still doing their thing, and... I think Kara Swisher had their CEO on her show, on her "On with Kara Swisher" podcast last week or a week before, talking about what they're doing over there... Because AOL was the thing. If you were like a Midwestern housewife, home in the middle of the day, like - I know what you were doing, because you were on our systems. [laughter]

It was kind of cool. You could see things... It was so much in the zeitgeist that you could see real life mirrored in the metrics. For things like the Super Bowl, right? You've got two quarters of play, halftime, two more quarters of play, and then you're done. So if you were watching the sports channel and the rest of the channels at AOL at the time, you'd see the traffic bottom out while the game was running, pop back up as everybody checked their news and email during the halftime show, bottom out during play, and then come back up to normal after the show. And you could do that for any big stuff: the Emmys, the Oscars... It was crazy what you could see as a reflection of what people were actually doing in real life, because we had so much of a view on it... Because it didn't matter whether you were looking at sports, or news, or weather, or entertainment, or anything - it was all in the monolith; it was all there, and we could see all of it.

Autumn Nash: Did you ever start recording any of that data, or learning from the traffic patterns? Because I had a friend who worked at MSN at the time, but I think after they were -- were they bought by Microsoft? Something like that.

Mandi Walls: Yeah... Hotmail was the original product there.

Autumn Nash: And they were already kind of starting to record clicks, and data, and what people were doing... And it's interesting, because I feel like people are under the impression that collecting data and learning through data is something new, but we've been doing that for forever, you know? So it's like... Did AIM do that at any point, or...?

Mandi Walls: Oh, I'm sure AIM did. And on the website, we did, too. There were tracking pixels, and other cookies, and all kinds of stuff built into all the pages, to know what folks would do, where they would click in different features, so that you'd know "Hey, they really engaged with this particular thing, but they didn't engage with this other thing... So we'll put more development on this particular module", whether it was like a horoscope, or weather, or whatever it was.
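A tracking pixel of that era is a tiny thing: the page embeds an invisible one-pixel image whose URL carries some context, and the server logs the fetch. A minimal sketch (the endpoint, parameter names, and port are invented):

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# A 1x1 transparent GIF; the page would embed something like
# <img src="http://localhost:8000/pixel.gif?module=horoscope">.
PIXEL = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        module = params.get("module", ["unknown"])[0]
        # This log line *is* the analytics: who fetched which module, and when.
        print(f"hit: module={module} ip={self.client_address[0]}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()
```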

Autumn Nash: Oh my God, I used to always check my horoscope.

Mandi Walls: You had to check the horoscopes, right? I'm glad you did, because they were pulling a bunch of load... So [unintelligible 00:51:49.02] and that was important. But yeah, there were all kinds of commercial products at the time that were helping us out on all that stuff... They'd spend a lot of money on that, because you want to push your resources to the things that people are going to engage with, because ultimately you're selling ads. And when you have something as big as aol.com, you make all of your money for the entire year out of ads before the end of February.

Autumn Nash: [52:14] Wow...

Mandi Walls: You're making a lot of money to run that thing, and it then gives you the capital to run everything else afterwards. So yeah, a lot of cash there.

Autumn Nash: I love just like the use of data to learn more about customers. I feel like people are almost outraged about social media in different places taking your data to learn about you, and I'm like "We've been doing that for forever."

Justin Garrison: And the data then - there were no metrics, or even OpenTelemetry... You're just looking at AOL logs, hit logs. Like "Hey, this is how many people are coming through." I can scrape it and pull the IP address to get some basic information, but that was the data - an access log.

Autumn Nash: Yeah. But it's gotten progressively -- like, just look at that Target thing, where people... Target was sending people baby coupons before they knew they were pregnant. And that was like 10-15 years ago. So just progressively, the more data that people started collecting, they started using, and it's gotten more and more, I guess, accurate in some ways... But it's just interesting, there's so many different ways to use data to either sell, or learn more, or to improve your product, and it's crazy that we've been doing it for so long and it just keeps progressing.

Justin Garrison: And you wonder why people want to run their own servers... [laughs]

Autumn Nash: I love that though. I remember I got to meet one of the ladies who did Alexa Shopping or something at Grace Hopper, and I was like "Yo, you keep reminding me to buy more popcorn. I love you." I mean, to a certain extent, right? You don't want people to have sensitive data... But we use that every day. Facebook's like "Do you want this new pair of Converse?" I'm like "Actually, I do... They're really cute."

Mandi Walls: And it's been an interesting evolution, because like you say, we really only had access to whatever tracking pixels they put on the page, and that went to the product managers; and then on the operations side, all you really got was the access logs. So you can see regionally who's coming in, where they're from... Do we need to break things out, so we're closer to those folks? You can make operational decisions from that. But then - yeah, you can see "Hey, for whatever reason, today no one's engaging with this particular channel. What's going on over there?" and they can look at the impact of an editorial decision, or what kind of features they've published today that people aren't engaging with. And it's interesting in that it's real time without being participatory. The users aren't giving you more than what they're looking at. Like, there's no data coming in from the user, like there is with social media. So they're not telling us "Oh, hey, I went to the park today with my friend." They're just clicking on whatever information.

Justin Garrison: You see the actions, [unintelligible 00:54:49.29]

Autumn Nash: Yeah, just reactions.

Justin Garrison: How did that apply to scaling? Because you mentioned during Super Bowls, and those things, the chat rooms would get really busy... How did you handle that on the backend, especially being on prem, and having hardware that you have to buy? Like, "I need to scale this thing up", and you have to make some operational decision. That's like a six-month process.

Mandi Walls: You overbuilt. Everything was overbuilt. Absolutely overbuilt. And when we had something like dot com, or the channels, or whatever, that had to be in multiple locations for DR, you made sure that every location could handle all of the traffic at your anticipated peak. And we can look back -- one of the other things that AOL never got to open-source was their monitoring system. And it had a -- I forget its name; it was weird. There were like tuna boats or something involved. It was strange...

Autumn Nash: Wait, tuna boats?

Mandi Walls: Yeah, that was part of the transports. I don't remember all the details there... But it was aggressive. It was really, really good, and gave us a lot of information about when you were hitting peak. And we could put custom data into it, and a bunch of other really interesting things that were unique at the time... And again, it would have been interesting to see, if it had been open-sourced, what people would do with it. But it would give you enough to know "Hey, you need to bulk this thing up", because like you said, there was no dynamic provisioning. It was all solid-built bare metal at that time. Everything had to be fully deployed.

Justin Garrison: [56:17] You get the page and then you dust off those extra machines you had in the back and you're like "Hey, these are gonna be web tier now."

Mandi Walls: Right? If you need to redeploy, then you need to pull in the extra machines you squirreled away in some other project, and reprovision them, and you might have to reload their operating system because they're on the last version... And then put on the runtimes, and the current code, and hook them all in, put them in machines.dat and off they go... So they get all the hookups... Because in the backend there was a custom CDN, and small object brokers, and repeaters for commands, and all kinds of other weird stuff they had to talk to for that particular platform. So yeah, there was no quick scale-up, so we were overbuilt all the time, on all those platforms.

Autumn Nash: Again, I'm so grateful for the cloud...

Mandi Walls: Absolutely.

Autumn Nash: Also, I just feel like that is amazing. You lived in a really exciting time, even though I feel like it must have been very hard at the moment... But, I mean, you've got street cred, Mandi.

Mandi Walls: I mean, we learned a lot of stuff. We learned that push sucks for deploying stuff, so you want pull-based deployment as much as you can... Because you don't know what's down out there. You've got 1,000 machines; at any given time one or two were probably offline, taking a nap, doing something... All that stuff. And deployment being ready for prod - that was a huge thing to try and teach engineers to think about, because they don't know anything about prod. They don't know what prod looks like.
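
The push-versus-pull point generalizes beyond AOL: a push deploy has to enumerate the fleet and silently misses whatever's down, while a pull agent on every host converges toward a desired version whenever the host is alive. A minimal sketch of that loop, with every hostname and path invented for illustration:

```python
# Hypothetical sketch of pull-based deployment: each host converges
# itself, so a machine that was "taking a nap" just catches up on its
# next poll instead of being missed by a push. All names invented.
import time
import urllib.request

DEPLOY_SERVER = "http://deploy.example.internal"  # assumed endpoint
CURRENT = "/srv/app/VERSION"

def desired_version():
    with urllib.request.urlopen(f"{DEPLOY_SERVER}/desired-version") as r:
        return r.read().decode().strip()

def local_version():
    try:
        return open(CURRENT).read().strip()
    except FileNotFoundError:
        return None  # fresh machine, never deployed

def converge():
    want = desired_version()
    if local_version() != want:
        # Fetch the release, then record what we're now running.
        urllib.request.urlretrieve(f"{DEPLOY_SERVER}/releases/{want}.tar.gz",
                                   f"/tmp/{want}.tar.gz")
        # ... unpack, restart services, run health checks ...
        open(CURRENT, "w").write(want)

if __name__ == "__main__":
    while True:          # the agent loop every host runs
        converge()
        time.sleep(60)   # jitter this in practice to avoid thundering herds
```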

Justin Garrison: Well, they were just learning how to code, right?

Mandi Walls: Exactly.

Justin Garrison: This is the early 2000s. It's just like "I don't know... Just learn the language and bang out some characters, and ship it."

Mandi Walls: They're busy trying to figure out Tcl, man. They've got no idea.

Autumn Nash: Is there anything you miss about those times, being an engineer in those times, compared to now?

Mandi Walls: No... We eventually had a pretty good relationship with the engineering team. And I feel like if you're in certain kinds of DevOps or SRE type deployments, you might not have as good a relationship across lots of engineering teams as we had... But that took work. That was hard to try and persuade people to come to the table with us and talk about "Hey, we want your stuff to succeed. We're not here to turn your stuff back and make you go back to the drawing board. We want to be able to deploy your cool stuff into prod, but you need to work with us on this."

So we eventually had pretty good relationships with most of the engineering teams on the content side... That I hope other folks have. You hope that you have a nice, mutually beneficial relationship with all the people that you're working with. But the other stuff, like putting tickets in, and requisitioning storage, and dealing with all that nonsense - absolutely not. Overbuilding and wasting so much power and energy for some of that stuff, to have that running?

Autumn Nash: That's crazy... So much money...

Mandi Walls: Yeah, all the cash that went into it... It was of its time, and I like the cloud much better.

Autumn Nash: That's wild to me, that people don't want to have a good relationship with the engineering team and the SRE team. You need them. That's like when people talk crap about QA, and I'm like "You'd better be nice to those people."

Mandi Walls: It's all symbiotic. You all rely on each other.

Autumn Nash: Not just that, but we're all in the same struggle.

Mandi Walls: Yeah. Everybody gets paid out of the same success, right? You all got to do it. So...

Autumn Nash: Like, your life's gonna suck if their life sucks, so why don't you just work together...? That's crazy.

Justin Garrison: Mandi, this has been great. Thank you so much for coming on and talking to us about --

Mandi Walls: It was super-fun. Absolutely.

Justin Garrison: Where can people find you online? If they want to reach out and say "Hey, by the way, my AOL is still down..."

Mandi Walls: Oh, yeah, I can't help you there... [laughter] Most of the time these days I'm on Bluesky. So I'm lnxchk on Bluesky. You can also find me on LinkedIn, just as /in/mandiwalls. And I'm in the HangOps chat, if folks out there are hanging out on HangOps...

Justin Garrison: I forgot about HangOps. I used to do HangOps all the time. That was great. Yeah.

Mandi Walls: Yeah, HangOps is still a busy Slack.

Autumn Nash: Is that a Discord? Where's HangOps?

Mandi Walls: That's a Slack.

Autumn Nash: Oh, there's a Slack...?!

Mandi Walls: Yeah, come join us on HangOps.

Justin Garrison: Yup. The HangOps Slack.

Autumn Nash: Okay, I have to go join that now.

Justin Garrison: Thank you so much, Mandi.

Mandi Walls: Alright, thanks so much.

Break: [01:00:22.02]

Justin Garrison: Thank you so much, Mandi, for coming on the show. We would love to talk to you again in the future about a lot of other things... Hopefully everyone enjoyed that. Also, if anyone out there listening used to run infrastructure, especially in the '90s or early 2000s, we would love to talk to you for more of these retro episodes. We've got at least one more lined up... And I love talking about this stuff, just because it was so different, and people forget what it was like.

Autumn Nash: It's talking about your childhood.

Justin Garrison: Yeah. I mean, there's some nostalgia to it, and then there's some of just like "I don't want to ever do that again." So email us, shipit [at] changelog.

Autumn Nash: It's also really cool to see how far things have come. The industry has really kind of gone through an evolution. It's amazing.

Justin Garrison: Things have changed a lot in the last 20 years, and I wonder what the next 20 will look like.

Autumn Nash: Well, it's interesting, with all the use of AI, and all the things that people are -- you know, the different infrastructure that people use. I feel like when I first got into tech, CI/CD and blue/green pipelines were the new cool thing, and now they're like the old thing. All of a sudden I'm like --

Justin Garrison: If you're not doing that... Yeah.

Autumn Nash: Yeah. I'm like "Whoa... How did we get here?"

Justin Garrison: Yeah. So I like the looking back and just seeing how things were... So feel free to reach out if anyone else wants to talk about it.

Autumn Nash: Also, can we have any excuse to talk to Mandi again? We should just make stuff up to talk to her again.

Justin Garrison: She also has a podcast, so people should -- I'm gonna drop that in the show notes too, because people should go check that out. It's part of the PagerDuty podcast; they have a lot of different hosts, but...

Autumn Nash: I want to listen to it. Me and Mandi have to be besties after this.

Justin Garrison: So for today's outro I have a fun game that we're going to play again...

Autumn Nash: I'm slightly scared.

Justin Garrison: Yeah, you might want to be. This one, I don't have a -- there's no good name for it... So it's just like an acronym --

Autumn Nash: I'm really sad that you don't have an acronym for this.

Justin Garrison: Well, it is an acronym, but I couldn't make it spell something. So the letters are JDCO, but that didn't spell anything, so I was like DOJC? I don't know. So we're just gonna go with whatever we want.

Autumn Nash: I'm disappointed that this isn't some weird name, but okay.

Justin Garrison: Yeah. So it stands for Java, Data, Cloud or Other. And those are your multiple-choice questions here for projects we're going to talk about. And all of these projects are part of the Apache Foundation. And so I was scrolling through the Apache Foundation, and I was just like "They have a lot of projects." Almost all of them have to do with either Java, data, cloud, or other.

Autumn Nash: All the things I love.

Justin Garrison: So I was like "This might be a good one", Autumn. So there's some that might be kind of obvious, and you're gonna pick one of Java, Data, Cloud or Other. So Apache Cassandra... Which category does that fall under?

Autumn Nash: It's a database, but it's also built in Java, and it's got an Apache license.

Justin Garrison: Right. And all of these would be like "Apache [Name]", and they all have Apache licenses. The vast majority of them are written in Java... And so this one would include databases and data processing of some sort. So Hadoop is another one... Hadoop is a data processing framework. So --

Autumn Nash: Also a lot of streaming in different ways, stuff like that.

Justin Garrison: Yeah, there's a lot of that in here, and this is why I wanted to talk about it and see what we think they are.

Autumn Nash: I'm terrible at remembering names though, so...

Justin Garrison: Most of these I did not even remember what they did. I knew the names of them, and I'm just like "Where would that fit, if I was guessing this?" So this is kind of for the audience to learn a little bit about just what projects exist, and kind of where they fall. So CouchDB... That's another one that you probably know.

Autumn Nash: That's a database.

Justin Garrison: And I didn't even know that was an Apache project. I honestly did not know.

Autumn Nash: Yeah. It was weird, at Google Next I saw a whole car wrapped in CouchDB stickers. It was very interesting.

Justin Garrison: That's one way to use those conference stickers.

Autumn Nash: They also gave me popcorn and a donut, so I'm totally their friend now, because there was a donut involved.

Justin Garrison: How about Apache Ant?

Autumn Nash: [01:06:11.05] That is, I'm pretty sure, a Java... Nah, is that a framework?

Justin Garrison: It's a build tool. It's a Java build tool. So yeah, it's a Java thing... Let's see, how about CloudStack?

Autumn Nash: I'm gonna go with cloud.

Justin Garrison: Yes, it's definitely a cloud -- it's like the Apache version of OpenStack, in many ways.

Autumn Nash: Oh...

Justin Garrison: It does a lot of that self-hosted --

Autumn Nash: I didn't know they had an Apache version of that. That's interesting.

Justin Garrison: Let's go with Flink.

Autumn Nash: Data?

Justin Garrison: It is data. It's a data processing engine. And so it's kind of like a -- I think it was stream processing, if I remember correctly... Guacamole?

Autumn Nash: Guacamole... Java.

Justin Garrison: This one would fall under cloud.

Autumn Nash: Oh, interesting.

Justin Garrison: It's an HTML5 remote desktop gateway.

Autumn Nash: That's a cool name. Is there a salsa that goes with it? Are there chips that go with it? Can you imagine if they had different parts, and one was like Salsa [unintelligible 01:07:05.02]

Justin Garrison: You're just building out a menu here.

Autumn Nash: Yes!

Justin Garrison: This is like "Okay, we're gonna go to the Mexican restaurant..."

Autumn Nash: Now I'm hungry, darn it...

Justin Garrison: I used to manage a Virtual Desktop Environment. Guacamole wasn't part of that, but I knew of Guacamole a long time ago, and I'm like "Oh, this thing is cool. It does Remote Desktop through a browser", because [unintelligible 01:07:22.11]

Autumn Nash: Oh, that is cool.

Justin Garrison: Yeah. And it's all HTML5.

Autumn Nash: I might go check that out.

Justin Garrison: How about log4j?

Autumn Nash: Oh... Java.

Justin Garrison: This falls in the CVE category... [laughter] Okay, Apache Brooklyn.

Autumn Nash: Ooh. Brooklyn. Interesting. Um... Cloud?

Justin Garrison: This one I'm gonna put under Other... It is a framework for modeling, monitoring...

Autumn Nash: You didn't tell me Other was an option.

Justin Garrison: That's the O.

Autumn Nash: Oh. I feel like you just said Java, Database and Cloud.

Justin Garrison: And there was an Other. Because I have a couple in here that are Other. It is a framework for modeling, monitoring and managing applications through autonomic blueprints. So it's making blueprints and then stamping out these applications. And I don't know where it's used, I don't know who uses that... If anyone knows, let me know.

Autumn Nash: I've never heard of that.

Justin Garrison: Yeah, I didn't hear of this one. This was kind of a fun, like "What? What does that do?" Flume.

Autumn Nash: I think Flume is data, isn't it?

Justin Garrison: It is data. It's a log aggregator.

Autumn Nash: Yeah.

Justin Garrison: VCL.

Autumn Nash: VCL. Java? I've never heard of this before. I'm guessing.

Justin Garrison: VCL is cloud. It's another VDI/cloud connection environment. This is specifically for, I think, managing the infrastructure side of it.

Autumn Nash: Interesting.

Justin Garrison: But yeah, I think Guacamole is a component in there, but they have this larger platform. More like a XenDesktop.

Autumn Nash: That's a missed opportunity. They should have named it Salsa, obviously.

Justin Garrison: Probably, yeah. This one made me mad...

Autumn Nash: Oh, no...!

Justin Garrison: This one's called Yunikorn.

Autumn Nash: I don't know, but it better be fabulous, because they named it Yunikorn. It better not suck. I don't know. How old is it?

Justin Garrison: It's pretty new. It's new-er...

Autumn Nash: Cloud?

Justin Garrison: I would put this under the cloud category. It's a scheduler for Kubernetes. The description was "Standalone Resource scheduler responsible for scheduling --"

Autumn Nash: Kubernetes gets all the cute stuff. Kubernetes and Salesforce get all the cute stuff. There's never cute stuff for Java. It's so annoying. That's it, I'm gonna start -- and then what's that new Kubernetes adorable thingy, and it's like all cute? I'm gonna start reading Kubernetes, dang it.

Justin Garrison: Phippy? Do you mean Phippy, the characters?

Autumn Nash: No. The new one, that they just released. It's like [unintelligible 01:09:38.14]

Justin Garrison: Oh, [unintelligible 01:09:39.05] I can't say it. But cute, yeah.

Autumn Nash: They always get all the cute stuff. You have all the funnest developer advocates... JavaScript and Kubernetes get all the good stuff, and it's not fair.

Justin Garrison: You can just join the club. It's fun.

Autumn Nash: Dang it. Now I've gotta go learn how to run Kubernetes...

Justin Garrison: But Yunikorn is mainly -- they say it's scheduling batch jobs, long-running services and large-scale distributed systems. And I'm like "That's pretty much all of the things." So I don't know what the difference is... But on their website it also said it focuses mostly on ML stuff. So I think it's ML/batch...

Autumn Nash: Interesting.

Justin Garrison: Yeah. Pig. Apache Pig.

Autumn Nash: I don't know... Data?

Justin Garrison: Yup. Good job. A platform for analyzing large datasets on Hadoop.

Autumn Nash: Interesting. I don't know the -- like, Hadoop is elephants? Or Pig -- I don't know.

Justin Garrison: Roller.

Autumn Nash: Cloud?

Justin Garrison: This is Other... It's a blog platform, all written in Java, and it ties into Maven. I've never heard of it before in my life, and I was like "Alright, fine..." And the very last one, let's go with -- this one's called NuttX.

Autumn Nash: What?! [laughter]

Justin Garrison: Someone lost the naming battle on that one. NuttX.

Autumn Nash: Why do you set me up for these things?! Data...? I don't know...

Justin Garrison: It's a real-time operating system for embedded systems - POSIX-style, Linux-like, but not actually Linux. So it's in Other. But yeah, I've never heard of it before in my life.

Autumn Nash: You can tell a dude named this. Like, why?

Justin Garrison: Like "NuttX." Okay, there we go. And now it's an Apache project.

So thank you everyone for listening to this episode. Thanks again, Autumn and Mandi, for coming on and joining and talking about AOL chat rooms... And we will talk to you all next week.

Autumn Nash: See you, guys.