A report on cloud Kubernetes usage shows that these resources are under-utilized, over-provisioned, and costing many organizations more than necessary. Year over year, average CPU utilization declined from 13% to 10%, and memory is used at only around 23%.

Companies are over-provisioning their clusters, which is understandable. No one wants systems overloaded and users complaining about performance. However, this is a similar tension to what we see with virtualization on-premises. Operations people want to leave plenty of CPU/RAM/IO headroom so systems can handle bursting or growing workloads. Management wants to get all the use they can out of their investment and would prefer we provision systems as closely as possible to their expected workloads. Containers and orchestrators should allow a closer match, but only if there are workloads that burst enough to require additional containers and pods to be deployed. That does happen occasionally with memory, where a little over 5% of containers exceed their memory allocation, but that’s not a significant amount.

Managing a Kubernetes cluster is a specialized skill, and most organizations don’t have the skills or experience to do it well. My view is that if you want to use an orchestrator, you’re better off letting the cloud providers manage the infrastructure and scale up and down as needed. There are autoscaling technologies to help Operations staff better manage their capacity and costs, but this is yet another skill people need to learn. While I do think some companies are adopting cloud-native technologies and rewriting their applications to run in containers and Kubernetes clusters, I find many more companies are hesitant to adopt a very complex technology on top of the complexity of teaching their developers to build applications that work within containers.

Certainly in the Microsoft space, I don’t see a lot of database servers running in containers. Despite some of the advantages for upgrades and downgrades, unfamiliarity with the ins and outs of containers leads most teams to continue to manage the database separately.

Resource matching to a workload is a problem we’ve had for years, and Kubernetes doesn’t make it any easier to deal with. The cloud is supposed to help us better manage our resources, but a lot of knowledge is needed to do this well. Add in the cost/performance issues in the cloud and it’s no wonder that many companies have over-provisioned their resources to ensure their systems continue running. I don’t know whether lots of IT staffers are optimistic about their workload growth or scared of potential problems from overloaded systems, but unless organizations carefully measure and manage all their resources, they are likely to continue to see larger cloud bills than they’d like.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes. Note, podcasts are only available for a limited time online.
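One footnote on measurement, from the database side of the house where this newsletter lives: before cutting provisioned capacity anywhere, check what an instance actually has and is using. Here is a minimal sketch against standard SQL Server DMVs; the targets you compare the output to are your own.

```sql
-- A minimal sketch: what this instance has, and how much it is using.
SELECT cpu_count,                                          -- logical CPUs visible to SQL Server
       physical_memory_kb / 1024   AS physical_memory_mb,  -- RAM on the machine
       committed_kb / 1024         AS committed_mb,        -- memory SQL Server has in use
       committed_target_kb / 1024  AS committed_target_mb  -- memory it would like to commit
FROM sys.dm_os_sys_info;

SELECT total_physical_memory_kb / 1024     AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc                            -- e.g., available memory is high/low
FROM sys.dm_os_sys_memory;
```

The same right-sizing conversation applies whether the resource is a pod or a VM: you can only trim headroom you have actually measured.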
Well, not really the end. I doubt anyone running SQL Server 2019 is going to stop (or upgrade) just because mainstream support ended. Actually, I wonder how many of you know that SQL Server 2019 passed out of mainstream support on Feb 28, 2025. I do think the 6 or 7 of you running Big Data Clusters likely knew this was the end of any support.

I saw a report in The Register on this, which includes a survey of which versions are still running. This is from an IT asset firm and matches Brent Ozar’s Population Report. 44% of you are running SQL Server 2019, the largest percentage of any version. Since an additional 32% of you are running versions older than 2019, I’m sure that upgrading isn’t a priority.

It seems like just a couple of years ago that SQL Server 2019 was released, but at the end of February Microsoft ended mainstream support for this version. There will still be security fixes released, but no more cumulative updates. The Register says that if you don’t upgrade, you might run into a bug and not get a fix (unless you buy extended support), but that’s never worried me. If I haven’t hit a bug 5 years in (or likely 3-4 years after my last upgrade), I’m not too worried. If I run into something, it’s likely from new code, and I’ll just change the code to work around the issue.

I do expect to run a database platform for a decade, and I am glad that Microsoft continues to supply security patches for this period. While I certainly want every database firewalled, reducing the attack surface area of known vulnerabilities is good. I also find myself less concerned about the security of older versions. If a big security vulnerability were discovered in SQL Server 2017 tomorrow that also exists in previous versions, and I had a 2012 server, I’d just prioritize an upgrade then.

Upgrades are hard, eat a lot of valuable time, and don’t necessarily provide many benefits. Most applications tend to use basic CRUD features and whatever was available in the version they were built against. If I used a tally table to split strings in 2017, I’m unlikely to rewrite that code to use STRING_SPLIT with an ordinal if I upgrade to 2022. That certainly isn’t a selling point for an upgrade, and my boss knows it isn’t something we’d take advantage of in older code.

I’m not a bleeding-edge person, and I wouldn’t push for upgrades. If you want to stay somewhat current with versions and are running 2019, I’d wait and test my application on SQL Server 2025 at the end of the year or early 2026. If I were mandated to stay current, I’d still do that, not jump to 2022 right now. However, I do recommend that everyone patch their systems with cumulative updates to ensure their security is up to date. There have been several security patches in the past few years that you should have applied, and if you haven’t, this is a reminder to do so soon.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes. Note, podcasts are only available for a limited time online.
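For anyone who hasn’t seen the two approaches side by side, here is a minimal sketch of the tally-table split mentioned above next to its modern equivalent. The sample string is made up, sys.all_columns is used only as a convenient row source for short strings, and STRING_SPLIT’s ordinal argument requires SQL Server 2022 or later.

```sql
-- Classic tally-table splitter: runs on SQL Server 2017 and earlier.
DECLARE @s varchar(100) = 'alpha,bravo,charlie';

WITH tally AS (
    SELECT TOP (LEN(@s)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_columns
)
SELECT ROW_NUMBER() OVER (ORDER BY n)                    AS ordinal,
       SUBSTRING(@s, n, CHARINDEX(',', @s + ',', n) - n) AS value
FROM tally
WHERE SUBSTRING(',' + @s, n, 1) = ',';  -- n marks the start of an element

-- SQL Server 2022+: the built-in version, with enable_ordinal = 1.
SELECT ordinal, value
FROM STRING_SPLIT(@s, ',', 1);
```

Working code like the first query is exactly why upgrades don’t sell themselves: it already returns the right answer, so the newer one-liner only matters for code you haven’t written yet.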
I have run into a lot of people in the last few years who love decoupled software and microservices. It seems many people are aiming to move their work in this direction, and while I see some appeal, I also see tremendous additional complexity that has moved out of the software and into the operations and debugging space. As I read the book Observability Engineering, I found myself thinking about the complexity of setting up more logging and instrumentation in an observability framework, as well as the costs of managing such a system. That caused me to think this is overkill for most software.

To be clear, I don’t think that Uber could have been built as a few monoliths, and there are other examples of such systems. There are lessons to be learned about large, real-time software systems, and certainly Google, Amazon, Spotify, Netflix, etc. can help us understand how certain techniques work better at scale, but most of us don’t work at that scale. Scale to me is thousands of connections and terabytes of data; those companies tend to work a couple of orders of magnitude above that.

Those are also companies that must keep real-time systems up to survive. The vast majority of companies I’ve worked at might suffer a loss from an outage of a system, but honestly, we could survive a day or two of recovery. Heck, look at all of the companies that have had portions of their digital infrastructure knocked offline by ransomware in the last few years. Some failed, but most didn’t. The incident sucked for IT staff, but those companies survived. I still remember the SQL Slammer worm forcing us to take our entire network offline for multiple days at a large software company. We still ran sales, support, and other systems independently or on paper. Granted, that was 20 years ago, but I’m sure Redgate and many other organizations could survive a few weeks with no network. Uber, on the other hand, would find that a disaster. They would likely survive, but at a high cost. How many people would jump to a new service and never look back?

I was reading a piece on coupling and complexity, which I’ll discuss a bit in another article, but it got me thinking about how often I see people overcomplicating their work by trying to move to a decoupled world instead of hiring and training people to adopt better architectures and communicate better. After all, moving to microservices isn’t going to avoid the training issue. You’ll still have to teach people more.

And while you’re at it, force every developer to pass a SQL test every year. That might help you build better systems as well. SQL isn’t going away, and better SQL code will result in much better applications everywhere.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes. Note, podcasts are only available for a limited time online.
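As for what such a SQL test might check, here is a hypothetical example of the kind of rewrite it should teach; the dbo.Orders table and its index on OrderDate are invented for illustration.

```sql
-- Non-sargable: wrapping the indexed column in a function blocks an
-- index seek, forcing a scan of every row.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2024;

-- Sargable: an open-ended date range lets the optimizer seek on the
-- OrderDate index and returns the same rows.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20240101'
  AND OrderDate <  '20250101';
```

No microservice boundary fixes a query like the first one; a developer who knows the difference does.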
The latest code supply chain attack isn’t a direct attack, but a failure of a system designed to be efficient. There is a Go (Golang) module that had malicious code inserted into it years ago. Someone caught this and removed the code from GitHub, but Google had cached it and has kept it alive for the last three years.

This attack worked because it relied on the Go Module Proxy service, which prioritizes caching. Even when the source changed, the cache wasn’t invalidated or reloaded, which seems like a major oversight or an even larger design flaw, but I know everyone trying to maintain software archives at scale tends to cache a lot. After all, so many developers load modules in their daily work, in CI, etc., that caching matters.

However, the bigger issue is that criminals are getting more enterprising and aiming for supply chain attacks where possible. One would hope that any PRs (pull requests) are carefully examined, but the truth is that a lot of people might depend on their unit tests passing in a CI build. They are not necessarily looking to see whether anyone has actually entered malicious code, or even poorly written code. I can see some people worrying more about code structure and naming (or tabs vs. spaces) than about what the code does. Imagine seeing a change that looks innocuous at the top of a file: all tests pass and you merge the change, but you didn’t notice the few new functions added further down in the file because they’re not obvious in the UI. Someone maintaining a popular code repo as a side project might be fooled here, but if this were in a corporate repository, we might be even more susceptible. We have more reasons to trust that a PR isn’t a problem if the code passes tests inside an organization. After all, who thinks criminals might insert code into their corporate repo? Not many people do, but we’ve seen quite a few successful supply chain attacks in the past. Who knows how many more we don’t know about.

Security is a hard business, and when it’s extended to the code we write, it might be even harder. I know there are security scanning solutions you can integrate into your codebase, but those detect what they know about, and criminals keep finding new ways to attack us. Ultimately, I think we depend on code maintainers carefully examining PRs from outside their circle of trusted individuals, and even then, things can slip through. Some days I think it’s truly a mad, mad world.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes. Note, podcasts are only available for a limited time online.