The final day of SCaLE 20x was bittersweet, as I was eager to see more presentations but not ready for it to be over.
The opening keynote was given by Dr. Kitty Yeung. Dr. Yeung is one of those amazing people who makes me feel completely inadequate. A graduate of Cambridge and Harvard, she has worked in fields as varied as fashion and quantum computing. She is also an artist, and most of her slides were ones she created herself.
A lot of her current work centers on the intersection of technology and fashion. Now I am the least fashionable person alive. Seriously, when I’m not in front of customers I wear the same clothes every day: a black, heavy-weight pocket T-shirt and Levis blue jeans.
I have often thought if I ever did start another company one option would be to create modern tech for older people. Now some people may say that products from companies like Apple are easy to use, but as someone who is often around people in their 80s I know this isn’t true for them. There should be a market for very simple, but powerful, tools aimed at people in this age group. I keep thinking of the Yayagram machine I saw a few years ago as an example.
Dr. Yeung’s work on integrating tech and fashion could be a great interface for these products.
Shifting gears a bit, the next presentation I attended was by Don Marti on privacy.
While it is hard for an individual to balance privacy and convenience in today’s surveillance economy, there are some steps you can take to minimize what personal information you share. I take a number of steps to increase my privacy while on the Internet and this talk gave me a few more tools to use.
One of the things I love about SCaLE is that they usually have an amazing closing keynote. It is cool because you get to end the conference on a high note, and as a speaker it is always nice to have something to keep people from leaving early on the last day.
This year’s keynote was no exception and featured Ken Thompson, one of the creators of Unix and a co-creator of the Go programming language.
Before he spoke, Ilan Rabinovich gave some closing remarks reflecting on 20 years of SCaLE (which I learned started out as an umbrella conference for Southern California area Linux User Groups).
You can see a much younger Ilan as well as the still very tall Gareth Greenaway in that picture from SCaLE 1x. As someone who has been working in open source for over two decades it just doesn’t feel that long to me, so it was cool to reflect on all that has happened.
Two decades pales in comparison to the experience of Ken Thompson. He was hired by Bell Labs the year I was born.
He gave us some of the history of his time there and walked us through the creation of what was probably the ur-archive of digital music. In the before times, back when mp3 encoding came out and people worked in offices, some of us would bring in our compact disc collections, rip them and place them in a common archive. Ken’s project pre-dated mp3s and started out as a quest to collect all the Billboard hit songs from 1957. As someone with mild OCD issues, I felt seen when he talked about how that expanded to collecting all the songs (grin).
Of course, digital content isn’t useful unless you can access it, so he modified a Wurlitzer jukebox with a couple of iPads to provide a cool interface, and then, because he is awesome, he bought a refurbished player piano with a MIDI interface so you could trigger that from the same device.
So the best way to sum up Sunday at SCaLE is that you are a lazy bum compared to folks like Dr. Yeung, Ilan and his team, and Ken Thompson, who apparently thinks about making a space shuttle out of discarded household appliances while you are watching re-runs of The Big Bang Theory.
Hats off to the whole SCaLE team for another great conference, and I’m so happy that it was back in Pasadena. I am already looking forward to next year.
I got up fairly early on Saturday and went through my presentation one final time. When working on a new talk there is a point where the feeling I get when thinking about having to present it goes from anxiety to eagerness and that happened this morning, so I felt ready to go.
The conference started off with a keynote by Arun Gupta, who is a VP at Intel focused on open ecosystems.
His talk was about using open source cultural best practices within an organization, and he used specific examples of how that was being done at Intel. It was the first time I had seen the abbreviation “CW*2”, which stands for the Zen saying “chop wood, carry water”. While that phrase has a lot of different meanings, when applied to open source it refers to the idea that a member of an open source community should not focus only on the high-profile aspects of the project but also on the more mundane ones that actually keep the project alive.
After the keynote it was time for my presentation. I was originally scheduled to speak on Sunday morning but due to a conflict I got a spot on Saturday. I was grateful as I like to get my responsibilities out of the way so I can enjoy the rest of the weekend without worrying about them.
I did a talk on open source business models and how things have changed in the past decade or so. My “hook” was to do the presentation in the format of an old school text adventure.
It was fun (and yes, there was a grue reference). It seemed to go over well with the audience and there were a number of great questions afterward.
With that over I decided to walk down the road to grab lunch when I ran into Gareth Greenaway. Gareth was one of the original organizers of SCaLE and it was cool to be able to catch up. He is currently doing some amazing things over at Salt.
SCaLE always has a wonderful hallway track and I also got to see John Willis. I had not seen him in years although we used to cross paths much more frequently and it was nice to be able to catch up. He is a co-author on a new book called “Investments Unlimited” which chronicles the DevOps journey of a financial institution.
I also had some time to wander around the Expo floor. I try to minimize the amount of swag I bring home but I’ve started to collect those little enamel pins that some people give out.
The AlmaLinux pin was given to me by the amazing benny Vasquez, who was spreading the word about their project, which helps fill the gap left by the CentOS project migrating to CentOS Stream.
This year I spent a lot more time in sessions than I normally do as they were just so good. Many times I found myself having to decide between three or more talks that occurred at the same time.
One that I didn’t want to miss was given by Zoe Steinkamp on using InfluxDB to monitor the health of plants.
I spent much of my professional career in observability and monitoring so I have a soft spot for unique applications of the technology. Zoe uses sensors to feed information about humidity, sunlight, etc. from her houseplants into InfluxDB so that she can use that information to maintain them in the best of health. My spouse keeps koi and I do something similar to monitor water temperature.
The next presentation I attended was on the Fediverse. Now I have never been much of a social media person, and last year I deleted my Twitter account, which left LinkedIn as my only mainstream service. I do have a Mastodon account, and with the recent migration of a lot of people to the platform I do find it useful, although I don’t spend nearly as much time on it as I did on Twitter. I think it has a lot of potential, however, and what it really needs is that killer app to make it easier to use.
Bob Murphy did a great talk on how the Fediverse is not Mastodon, and he introduced me to a number of other services that use ActivityPub, which is the underlying protocol. For example, there are sites that focus on image as well as video sharing, not just microblogging. Speaking of blogging, Automattic (the company behind WordPress) announced that they acquired the makers of an ActivityPub plugin to bring the technology in-house and it seems like they plan to make it a core part of their app.
The final talk I attended was given by Michael Coté. I’ve known Coté for over two decades back when he lived in Texas and it was nice to see him again (he’s living over in Yurrip these days).
As usual, he provided some great insights on what he is calling “platform engineering” (think DevOps mashed up with SRE).
After the talks were over I met up with some friends for dinner. Now I am a fan of the television series The Big Bang Theory. It is set at Caltech which is located in Pasadena, and there is even a street named “The Big Bang Theory Way” (my picture of the street sign didn’t come out, unfortunately). During the weekend I kept hearing people talk about a place called “Lucky Baldwins”. I thought it was a joke since the character of Sheldon in the TV show makes a reference to the place in an episode called “The Irish Pub Formulation” but it turns out it exists.
We stopped there for a drink and ended up staying for dinner. It was a nice ending to a busy day.
I spent Friday morning practicing and working on my presentation, but managed to make it over to the conference just before lunch.
I was really impressed with the “steampunk” graphics for this year’s show. They were cool.
Check-in, as usual with SCaLE, was a breeze. They have automated most of it: you walk up to a bank of computers, choose one, enter your registration information, and your badge gets printed. I believe you could purchase a registration through the system as well.
Then you walk down to a table to get your conference bag, badge holder and lanyard.
After wandering around for a bit I went down the street to meet up with Aaron Leung. While I love many things about being able to work remotely, I do miss meeting people in person and especially people I work with at AWS. Aaron happens to live in LA and he was kind enough to come out to see me and we had a great lunch.
Having SCaLE back in Pasadena was awesome. Not only is the convention center nice, it is really close to a ton of restaurants so you have a bunch of options for dining. The only downside was that it was raining (you can see the folks with the umbrellas above). When I had to go outside it wasn’t bad – more of a mist – and it was strange to have rain in LA. It did make the hills very green, however, and quite the departure from the usual tan.
After our long lunch I worked some more on my presentation, and then headed back over to the conference. The Expo floor was open so I spent some time wandering around and looking at the booths.
The “forgotten” operator in the title refers to people tasked with running on-premises data centers. Now I’ve been in a number of data centers and they were all as Bryan described: racks upon racks of 1U and 2U servers arranged in rows, some with “hot” and “cold” aisles, and each server with a pair of power supplies and lots and lots of cabling.
I have never been inside a Google or Amazon data center, but I’ve always imagined it to be more along the lines of the one Javier Bardem’s character set up in Skyfall.
In these days of the “cloud”, compute is divorced from storage and so a lot of the hardware in an old school 1U rack mount machine is unnecessary. Plus there is the antiquated idea of having separate power supplies for each board in the rack. Computers run on DC power, so why not just supply it directly from a central source vs. individually? I started my professional career working for phone companies and everything was DC (many central offices had a huge room in the basement with chemical batteries – and, yes, it did smell).
When I started my own company 20+ years ago I had two Supermicro 1U machines and when I turned them on they were each louder than a vacuum cleaner. Bryan told us that their racks are whisper-quiet (well, once they are powered on and the fans on the rectifiers spool down).
I’m oversimplifying, but that is the basic idea behind Oxide. They want to supply cloud-grade computing gear to enterprises and break the old paradigm of what a data center should look like. Users can still leverage cloud technologies like Kubernetes but on their own gear. It still doesn’t solve the need to have people who understand the technology on staff, but it was exciting in any case.
Lightning talks are 5 minute presentations consisting of a set number of slides that advance automatically. I’ve never given one, and once when I mentioned that I thought it was cool it was pointed out that I can’t introduce myself in five minutes, much less give a talk. (grin)
I was impressed with the presentations. One that stuck out was the fact that the term “open source” as formalized by the Open Source Initiative is now 25 years old. Wow.
After Upscale a group of us went down the street for dinner and drinks. I can’t emphasize enough how much I miss the face-to-face aspect of in-person conferences, and I hope we can continue to have them safely.
Today I left for Pasadena and the 20th iteration of the Southern California Linux Expo.
SCaLE is one of my favorite events of the year, and I’ve been coming (for the most part) since SCaLE 5x.
This year I’m giving a presentation on open source business models, and I’m pretty happy with how it turned out.
I didn’t get to attend any of the sessions or activities on the first day, but I did manage to have dinner with some friends including Ilan Rabinovich, who is one of the main organizers of the event, and Stephen Walli, who works on the open source team at Microsoft. I also got to meet for the first time Amye Scavarda Perrin who is a program manager at the Cloud Native Computing Foundation.
Ilan, Stephen, myself and Amye
While I think virtual conferences have a lot to offer in the way of education, I really do miss these opportunities to meet face to face and to interact with interesting people. I’m hoping that in-person events become more common in 2023.
Not to misquote the Beatles, but it was 20 years ago today that I posted my first entry to this blog.
By 2003 blogs were pretty popular so I was somewhat late to the game. My friend Ben Reed had a blog that he used kind of like a proto-Twitter where he would post many times during the day on what he was doing, which at the time focused on porting KDE to MacOS. Back then a lot of open source projects used blogs as a communication platform, and since I was maintaining an open source project I figured I should start one. He used Movable Type as his blogging software so I did as well.
Movable Type was very popular back then, but when they started to move their licensing to a more proprietary model, people were turned off and migrated to WordPress. I find it delightfully ironic that WordPress, which is open source, now forms the basis for around 40% of all websites, while these days most people have probably never heard of Movable Type.
If there happen to be any younger readers here, blogs twenty years ago were like podcasts today: practically everyone had one. Also like podcasts, most were sporadically updated, which is why Really Simple Syndication (RSS) became important. RSS is a protocol that lets you find out when websites are updated. Using a “news reader” like Google Reader, you could aggregate all the websites you were interested in following into one application. It was pretty cool.
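For the younger readers, part of why RSS caught on is how simple it is: a feed is just XML that lists a site’s recent items. Here’s a minimal sketch of what a reader does, using Python’s standard library; the feed content is made up for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 feed for illustration.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate></item>
    <item><title>Second post</title><pubDate>Tue, 02 Jan 2024 00:00:00 GMT</pubDate></item>
  </channel>
</rss>"""

def item_titles(feed_xml: str) -> list[str]:
    """Return the titles of all items in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))  # ['First post', 'Second post']
```

A news reader just fetches each feed on a schedule and shows you the items it hasn’t seen before, which is how one application could follow dozens of sporadically updated blogs.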
But then along came social media sites and what people used to post on blogs they started posting there instead of on their own sites. Even with a lot of hosting options, running a blog is incrementally harder than posting to, say, Facebook. In 2013 Google killed Reader which pretty much ended blogging (although I still use RSS and find that the open source Nextcloud News is a great Reader replacement).
But I’m old and stubborn so I kept blogging. In fact I think I have something like five or six blogs that I update periodically. I use another blogging technology called a “planet” to aggregate all of those blogs so my three readers can easily keep up with what I’m doing.
Another thing that social media brought about was this idea of engagement. People still look at metrics such as number of followers as an indication of how far a particular post reached, and even when I started this thing folks would brag about their stats. As a contrarian I took the opposite approach and decided that I’d be happy if just three people read my posts. I got a chuckle the first time someone came up to me and said “hey, I’m one of your three readers”. Made the whole thing much more personal.
And to me blogging is personal. I love to write and the best way to become a better writer is to do it. A lot. I really wish I had more time to post but between my job (which involves a lot of writing) and the farm it is hard to find the time. As someone who loves the culture around open source software, sharing is key and I hope some of the stuff I’ve posted here has helped someone else as so many other blogs have helped me.
That’s about it for this update. I would promise that I’ll post more often and with better content in the future but I don’t like to lie (grin), and in any case thanks for reading.
While I love living “out in the country” I often envy my urban friends for their network connectivity. When I moved out to the farm in 1999 the only “high speed” access was satellite, and even that required a modem and a phone line. I was overjoyed when Embarq finally deployed DSL to my house, and while the download speed wouldn’t seem like much these days it seemed heaven-sent back then.
Jump forward 20 years and Embarq became Centurylink which is now Brightspeed. I had a pretty poor opinion of Centurylink (or as I called them CenturyStink) but high hopes for Brightspeed when they bought Centurylink’s ILEC business in our area, but they have been disappointing. Here is that story.
Both my wife and I work from home, and when our DSL circuit is working it works well enough for us to get our jobs done. At 11Mbps down and 640kbps up it doesn’t even qualify as “broadband” but it is a trade off I’m willing to make in order to live where I live.
Starting back in early November we began to have issues with the quality of the DSL connection. Quality issues are always frustrating since the support technicians at the provider never seem to have the tools to properly measure it. Instead they just tell me the circuit is “up” so I should be satisfied, even though I tell them that while it is up, it is unusable.
The issue was high latency and packet loss. Latency is a measure of the time it takes information to travel through the network, and packet loss indicates that some of that information never makes it to its destination. The protocols used in networking automatically deal with packet loss by sending the information again, but the more this happens the worse the experience is for the user. Things that can handle packet loss gracefully, like e-mail, web pages and chat, just seem very slow, while anything that requires a steadier flow of information, like video or gaming, just doesn’t work at all.
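To make those two measures concrete, here is a small Python sketch (my own illustration, not any particular monitoring tool) that summarizes a batch of ping results, with `None` standing in for a packet that never came back:

```python
def summarize_pings(rtts_ms):
    """Summarize ping results: rtts_ms is a list of round-trip times in
    milliseconds, with None marking a lost packet."""
    received = [rtt for rtt in rtts_ms if rtt is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    avg_ms = sum(received) / len(received) if received else None
    return {"avg_ms": avg_ms, "loss_pct": loss_pct}

# A healthy circuit hovers near a steady baseline with no loss...
print(summarize_pings([45, 46, 44, 45]))      # {'avg_ms': 45.0, 'loss_pct': 0.0}
# ...while a degraded one shows wild swings and dropped packets.
print(summarize_pings([45, 900, None, 300]))  # {'avg_ms': 415.0, 'loss_pct': 25.0}
```

A monitoring system does essentially this, continuously, and graphs the results over time.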
Having done network monitoring for much of my professional life, I monitor the quality of my DSL circuit by attempting to reach 8.8.8.8, a highly available DNS server run by Google. Here is a recent graph:
Now normally the graph should be green and pretty much focused around 45ms. This one was all over the place. I asked my neighbor to execute a ping to the same IP address and her connection was working fine, so I assumed it was an issue specific to me.
Trying to get support from Brightspeed was very frustrating. As I mentioned above they just tend to tell me everything is okay. I even reached out to one of my LinkedIn contacts who is an executive vice president at Brightspeed for help, and I think he was responsible for a ticket being opened on November 7th in the “BRSPD Customer Advocacy (Execs)” queue. I really appreciated the effort but it didn’t help with getting my issue resolved.
Just after Thanksgiving I called again and I was told that the problem was that my modem was wrong and doesn’t work with the DSL circuit we have, even though it was the model they sent to me and it had been working fine up until November. In any case they said they would send me a replacement and it would arrive in two days.
Ten days later I call back and ask, hey, where is my modem? They told me it was still “in process” but since I’ve been waiting so long they’ll overnight one to me. It shows up the next day and of course doesn’t fix the issue, but it has newer software than my other modem and reports the status of the DSL circuit. On the web page of the device this is usually represented by the word POOR in dark red, but sometimes it would improve to MARGINAL in a slightly lighter red. I call and explain this to Brightspeed, and after dealing with this for over two months they agree to send out a technician in two days.
When I finally get the e-mail confirming the appointment, it is for the following Thursday, January 5th, eight days away. That also happens to be when we are closing on a new house, so I can’t be here to meet with them. For some appointments they show up early, so I didn’t change it right away, but when they hadn’t shown up by Tuesday I decided to reschedule it.
I went to the e-mail and clicked on the link to reschedule and got sent to the Centurylink site. Of course they wanted me to confirm my account, but nothing I typed in: phone number, e-mail, or account number, worked because I no longer have a Centurylink account. (sigh)
[Note: it looks like this has been corrected, finally, on the Brightspeed website. Not sure about links in e-mails]
In the process I did find out that my Brightspeed account number ends in “666” so perhaps that is indicative of something.
I eventually ended up calling support once again. I believe it would take the average caller about six minutes to reach a human through their system, as it prompts you for a variety of things before allowing you to speak to a person, but I had been calling for over two months so I can speedrun the thing in about four and a half minutes by pressing buttons before the prompt is finished.
The person I talked to about rescheduling the appointment kept me on hold for about 30 minutes before telling me that the whole dispatch system for technicians was down and that she would call me back in two hours. She never did.
The next day I made one more attempt to reschedule the appointment, but was told that the next available appointment was so far out in the future that I should just keep it, since the technician won’t need to enter the house. I left a long letter taped next to the demarcation box on my house with a detailed description of the problem, and hoped for the best.
Unfortunately, they sent out Brandon. To my knowledge there are only two technicians assigned to our rather large county: Brandon and Elton. I much prefer working with Elton since Brandon doesn’t really seem to be the kind of person who does a deep dive into the problem, but I recently learned that Elton has moved into the back office and wasn’t doing service calls anymore.
As I feared, Brandon marked the issue as closed without fixing it. Once again into the support phone queue, where I was told that he had run a test “for five minutes” and my circuit was fine. (sigh)
I did get a text asking if my problem was resolved, to which I said “NO!” and I was later contacted by a person from Brightspeed to follow up. After a very long conversation she offered to send someone else out, and that person arrived yesterday.
Philip, who is based out of Wake County (one county to the east of us) showed up promptly at 8am and within ten minutes had diagnosed a grounding issue with the wires coming to our house. In about 45 minutes he had repaired it, but he warned me that there was also an outage in the area which would explain my now 900ms ping times (but no packet loss). I trusted him that it would eventually resolve and about 30 minutes later things were much, much better.
You can see where the network was bad before Philip showed up, the gap where he was working on the system, and then the return to a more expected quality of service.
It still isn’t perfect. I’m seeing a lot of jitter from time to time which is indicated by the spikes, but for the most part the user experience is fine. I was able to participate in our departmental weekly video call without issue yesterday for the first time in months.
And that’s what really bothers me the most. For nearly three months Brightspeed was gaslighting me that my service was fine, when, as most IT professionals would expect, it turned out to be a physical layer problem. In retrospect it makes sense, since we’ve been having an especially wet winter and that would have amplified the grounding issue.
I figure I spent between 40 and 60 hours actively involved in getting this addressed, and that is time I’ll never get back.
Of course it could be worse. The local newspaper published a story about a community in Chatham County that was without service from Brightspeed for a total of 51 days. At least our connection was usable enough that it only required a few trips to the public library for access during important deadlines.
There is some good news in that same newspaper issue that some attempts are being made to help those of us in rural areas get broadband. Some of you may be thinking Starlink, but I was on their waiting list for two and a half years without getting my equipment and when they pushed it out to late 2023 I just gave up and asked for my deposit back.
I am not a huge “we need to regulate everything” kind of guy, but broadband has become one of those services that is so important, and that the free market has so clearly failed to provide, that I would welcome government involvement in getting this issue addressed. But so far the communications lobby has been strong enough to prevent any kind of oversight, so I won’t hold my breath.
Ever since my parents got older, I’ve been wanting to create a tech company focused solely on making technology available for the elderly in a fashion that is easier for them to understand. Perhaps this will all go away with AI and digital assistants, or when my generation that grew up with tech gets older, but I have watched them struggle sometimes with mobile phones and TV remotes, even ones that are supposed to be simple, and I realize there is probably a market for such solutions.
While I don’t have capital for such an undertaking, I do have access to open source software and hardware, and I have a device idea that shouldn’t require heroic effort to create.
Remember the “Easy Button” from Staples?
What I want is something about the same size, but when you press it, it will send a notice to an app on my phone.
My mother died last year and we recently bought a new home in part because it has a basement apartment where my father can live. He will have his own space but we’ll be close enough to help him out if he needs it.
The idea for this button came to me when I was thinking about what would happen if he needed some help but for whatever reason couldn’t call out loudly enough for us to hear him or get to a phone. Unless he was severely incapacitated he should be able to press a big button, and since I almost always have my phone with me, all I would need is an app that could send me a notice.
Since my three readers are very smart (and not to mention devilishly attractive) you have probably thought about existing services (remember the “I’ve fallen and I can’t get up” Life Call ad from 15 years ago?) but older people can be extremely proud and they hate being reminded of their age. I’m pretty certain my father would resist carrying around a device on a lanyard but he wouldn’t mind having a button nearby “just in case”.
The feature set would be pretty short:
A big button (‘natch)
Some indication that the button has been pressed (light or buzzer)
A way to name the button
Configure the Wi-Fi connection
Configure a list of users to contact when the button is pressed
For ease of use the first generation of such a device would not be battery-powered, but if it were, there would need to be a way to make sure the battery stayed charged.
While I have worked with Raspberry Pi boards I have not done anything with the Pi Zero, but I assume this would be a perfect application for it. The original Easy button could be repurposed for this device but there are also a ton of options on Amazon that could work as well.
A bit harder would be the app software, as I am led to believe getting notices in the background on mobile devices can be tricky, and I don’t want the app to have to run all the time. I have enough skills that I could make something that would send an e-mail (which would remove the need for a separate app) but I’m hoping for a solution with more reliability and less latency. It would be nice to have it send a notice no matter where I am, but if it were easier to only work when my phone was on the same local network as the button, that would be acceptable (I’m trying to figure out a solution that wouldn’t involve a server).
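To make the idea concrete, here’s a rough Python sketch of what the button’s software might look like. Everything in it is my own assumption: I’m imagining a push service like ntfy.sh as the notification channel (it has a simple HTTP API and a phone app that handles background notices), the topic name is made up, and the actual GPIO wiring is stubbed out since it depends on the board:

```python
import urllib.request

# Hypothetical notification endpoint; anyone subscribed to this (made-up)
# topic in the ntfy phone app would receive the alert.
NTFY_TOPIC_URL = "https://ntfy.sh/my-hypothetical-button-topic"

def compose_alert(button_name: str) -> dict:
    """Build the notification title and body for a button press."""
    return {
        "title": f"{button_name} was pressed!",
        "body": f"Someone pressed {button_name}; please check in as soon as you can.",
    }

def send_alert(button_name: str) -> None:
    """POST the alert to the notification service (assumed ntfy-style API:
    request body is the message, the Title header sets the title)."""
    alert = compose_alert(button_name)
    req = urllib.request.Request(
        NTFY_TOPIC_URL,
        data=alert["body"].encode("utf-8"),
        headers={"Title": alert["title"]},
    )
    urllib.request.urlopen(req)

# On the actual device, a GPIO library on the Pi Zero would call
# send_alert("Dad's button") when the physical button closes the circuit.
```

This still relies on someone else’s server, which is the part I’d like to eliminate, but it keeps the on-device feature set as short as the list above.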
Anyway, just putting this down as a placeholder for when I have some free time to pursue it, but I also figure someone may have done this already and by posting this I’ll find out about it.
I am always optimistic with the New Year, but 2023 has already brought me one disappointment: the death of the Dark Sky weather app.
My friend Ben introduced me to Dark Sky many years ago. Unlike most weather apps, Dark Sky focused on micro-forecasts. It would tell you when rain was imminent, how strong it would be, and how long it would last. It was amazingly useful. When I was at Lollapalooza back in 2017 a torrential downpour hit Chicago – so strong that it shut the festival down. But I remained relatively dry because Dark Sky warned me it was coming with about a 10 minute lead time. That allowed me to run to the subway and get out of the weather before the skies opened up.
Apple acquired Dark Sky back in 2020 and shut the app down at the end of 2022, folding some of its features into the built-in Weather app. For a company that prides itself on UI/UX you would think they would do a better job of it. This is a screenshot of the Dark Sky app just before midnight on the last day of 2022.
With one click you see the current temperature, the rain outlook, and a timeline for how long the rain will last.
In the Apple Weather app you have to open it, scroll down until you find the precipitation widget (and not the precipitation map), click on that and you can kind of figure out the rain forecast if you look hard enough. Here is the prediction for Wednesday.
I mean, I’ll get used to it, but it is sometimes hard to say goodbye to something you’ve used for years. If the app were open source there is a chance it would live on, but when we opt to use proprietary software we also cede a lot of choices to the software vendor. So it goes.
The thing I like most about my job is that I get to meet and work with amazing people. Recently I traveled to Helsinki, Finland, to attend the MariaDB Server Fest conference. It was a great experience and I met some very talented people, including Monty Widenius himself.
Note: The usual disclaimer that this is my personal blog and what I write here does not necessarily reflect the views of my employer, Amazon Web Services
My role at AWS is to work with open source companies and communities and to act as a liaison between them and Amazon. In thinking about important open source projects one of the first that comes to mind is MariaDB.
When I first got seriously involved in open source back in 2001, the MySQL database was an example of an open source success story. While a lot of the focus of the early days of open source was on the operating system, MySQL demonstrated that open source applications were powerful enough to compete with existing proprietary solutions. Plus, if you were building an open source application, quite often you needed a database, and MySQL provided a great option.
Many of us expected MySQL to IPO, but instead the company was bought by Sun Microsystems. That wasn’t too worrisome since Sun was a big proponent of open source, but when Sun was bought out by Oracle a couple of years later, that all changed.
On the day the acquisition was announced, Monty Widenius (the lead developer of MySQL) announced a fork of MySQL called MariaDB. In the years since then, a lot of people have replaced MySQL with MariaDB. While Oracle has continued to work on MySQL, the last major release, version 8.0, came out in April of 2018, so one must wonder how motivated they are to work on a product that competes with their main proprietary offering.
When I learned about Server Fest I decided to attend. As much as I like the ease of remote communication, sometimes nothing beats meeting face to face. I had also been to Helsinki a couple of times before and I really like the city, although I really should try to visit once in the summertime.
I flew from North Carolina to JFK and then took a Finnair flight to Finland. Helsinki is seven hours ahead of New York, so it is one of those weird trips where you leave at night and land the following afternoon. When I travel I tend to stay at Marriott properties, but all the Marriott-affiliated hotels were booked. I later learned that this was because a popular start-up conference called Slush was happening at the same time as Server Fest. Because of this, there was no meeting space for rent, so the MariaDB event was being held at Monty’s house, which I thought was kind of cool.
The conference was on Thursday, November 17th, and was going to be live-streamed on YouTube. In order to better match up with the time zone in New York, it started around 3pm and ran into the night. I arrived mid-morning.
When you walk into Monty’s house, the first thing you notice is that it has a very open floorplan. Directly across from the entryway is a huge table that can probably seat about 20 people, and that’s where most folks had set up their laptops. To the right of that was a large kitchen, and to the left was an open area where the walls were lined with bookcases, and that is where lights and cameras had been set up for the livestream.
Now MariaDB is organized in two parts. There is the MariaDB corporation, which is the main commercial enterprise behind the project, and there is the MariaDB Foundation, which manages project governance and promotion. Both were represented among the day’s presenters, and I also got to meet and spend a lot of time with Kaj Arnö, who is the CEO of the Foundation.
I also got to meet the true boss of the event, Anna Widenius, Monty’s wife. As you can imagine, getting a bunch of open source geeks organized is like herding cats, but she did a great job in getting the conference underway and keeping it moving.
The first speaker talked about “Chasing Bugs in Production”. When Wikimedia upgraded from MariaDB 10.4 to 10.6 they ran into a performance issue, and his talk described their upgrade process and how they were able to work with MariaDB to get the issue addressed. I also found it interesting that they run MariaDB on bare metal. So much of today’s IT infrastructure is based on clouds and Kubernetes that it was refreshing to see someone taking advantage of individual servers when it makes sense.
There was a bit of a hiccup with the second speaker who was supposed to join remotely, so Monty Widenius moved his presentation on query optimization in MariaDB to the second slot.
There are several methods that can be used to execute a query against a database, and a good database will examine the possible plans and choose the one it expects to return the result in the fastest time. In MariaDB 11, Monty has changed the optimizer to be cost-based (versus rule-based), with parameters that can be tuned by the user. This has resulted in more efficient queries and thus a better user experience.
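I am not an optimizer expert, but as a rough sketch of what this looks like in practice: MariaDB lets you inspect the plan it chose with EXPLAIN, and (as I understand the 11.x release notes) the cost model is exposed through session variables you can tune. The table, query, and variable names below are purely illustrative, so verify them against your own server before relying on them.

```sql
-- Ask the optimizer which plan it chose for a (hypothetical) query
EXPLAIN FORMAT=JSON
SELECT * FROM orders WHERE customer_id = 42;

-- List the tunable cost parameters available on this server
SHOW VARIABLES LIKE 'optimizer%cost%';

-- Nudge one of them for the current session and compare plans
-- (variable name per my reading of the 11.0 notes -- verify locally)
SET SESSION optimizer_where_cost = 0.1;
EXPLAIN FORMAT=JSON
SELECT * FROM orders WHERE customer_id = 42;
```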
The next presentation covered MindsDB, which allows you to integrate machine learning easily into your database. In the example shown, a model hosted on Hugging Face analyzed text to detect “sentiment” – i.e. whether the text is positive, negative or neutral – and you access all of this using SQL queries.
For example, suppose you have a blog or other website where users can submit comments. MindsDB would allow you to examine those comments to detect general sentiment without having to learn an entirely new system. I thought it was pretty cool.
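I haven’t used MindsDB myself, so treat the following as a sketch of the idea rather than working code: the database names, model name, and the exact CREATE MODEL options are assumptions based on my reading of their documentation, and the syntax has changed between MindsDB versions.

```sql
-- Register a sentiment model backed by a Hugging Face classifier
-- (engine options and model name are illustrative)
CREATE MODEL mindsdb.comment_sentiment
PREDICT sentiment
USING
    engine = 'huggingface',
    task = 'text-classification',
    model_name = 'cardiffnlp/twitter-roberta-base-sentiment',
    input_column = 'comment';

-- Score every blog comment by joining the table to the model,
-- using nothing but ordinary SQL
SELECT c.comment, m.sentiment
FROM my_blog.comments AS c
JOIN mindsdb.comment_sentiment AS m;
```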
The next presentation resonated with me, as it focused on building the sponsorship community within MariaDB, for both individuals and entities. MariaDB is an important piece of technology with a lot of stakeholders, and this talk really reinforced the idea of a “big tent” environment within the project.
For the next presentation we finally got to hear from Federico Razzoli, founder of Vettabase (he was originally scheduled to go second but there was some time zone confusion), as he talked about new MariaDB features to learn “for a happy life”.
He started off with the comment that MariaDB (like open source projects in general) is very good at creating new features and not so good at documenting or advertising them. He discussed the most recent releases of MariaDB and then highlighted various new features that people should find useful.
From what I can tell, the idea behind multi-version concurrency control (MVCC) is this: active databases are constantly processing transactions, but there is a need to provide a consistent “view” of the data at a given point in time. MVCC determines which transactions are to be considered committed at that point in time and which are not, which prevents someone reading from the database from being served incomplete information.
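You can actually watch this behavior from the SQL prompt. As a sketch (the table and values here are hypothetical, and Session B is assumed to be running with autocommit on), open two sessions against the same InnoDB table:

```sql
-- Session A: open a transaction with a consistent snapshot
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT balance FROM accounts WHERE id = 1;  -- say this returns 100

-- Session B, meanwhile, commits a change:
UPDATE accounts SET balance = 200 WHERE id = 1;

-- Session A, still inside its transaction, repeats the read:
SELECT balance FROM accounts WHERE id = 1;  -- still 100: B's commit is
                                            -- hidden by A's snapshot
COMMIT;
SELECT balance FROM accounts WHERE id = 1;  -- now 200
```

The second read in Session A returning the old value is MVCC at work: the reader is served the version of the row that matches its snapshot, not the newest one.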
As transparency is key to any open source project, MariaDB publishes statistics on code contributions. The latest one I can find is through September of this year, and I was happy to see Amazon on the list of contributors to the MariaDB Server code.
Of course the majority of code commits, nearly 80%, were made by the MariaDB corporation, and another 14% by the MariaDB Foundation. Amazon represented 1.42% of contributions, but Andrew pointed out to me that they came from 14 unique committers, versus 8 from the Foundation. I’d love to see that involvement increase.
MariaDB’s binary log (binlog) records all changes to the databases, both data and structure. The mariadb-binlog command-line tool lets you examine this log, and it now supports Global Transaction IDs (GTIDs), making it easier to filter transactions.
After the Server Fest stream was over, we got to my favorite part of any conference – the socializing.
I did spend some time talking with Manuel Arostegui. One of my friends, Eric Evans, works at the Wikimedia Foundation, focused on Cassandra. It turns out that both Manuel and Eric are in the same department. Small world.
We eventually sat down to a dinner prepared by the Wideniuses. Monty cooked a huge beef tenderloin, and we talked, sang songs and drank. I managed to get back to my hotel about 1am the next morning.
Usually when I travel home from Europe my flight will leave around noon and I get back in the early afternoon local time. For some reason those flights were over $1000 more than the Finnair flight that left at 5pm, so I returned to Monty’s on Friday morning to visit for a few hours.
I loved the fact that Monty was so welcoming and also that he and his family keep a lot of animals (we do the same). In addition to six cats and three dogs, there is a boa constrictor named Monty Python who is about two meters long. The story I heard was that it was a gift given to a family member that ended up at Monty’s. They originally thought it was a python but later learned it was a boa, but the name stuck.
The trip home was uneventful except for the fact that I got home close to 2am, and I ended up catching a bad case of influenza. To my knowledge no one else at the conference got sick, for which I’m happy, and while it knocked me out of commission for almost two weeks, it was worth it.
Today marks my three month anniversary with AWS, and I’m loving it. It has been a lot of fun returning to conferences, so I thought I’d post a list of the ones I will be attending for the rest of the year.