
Episode 17: 99.99999% Uptime with Anna Berenberg

34:20
 

Sponsored by Reblaze, creators of Curiefense

Panelists

Justin Dorfman | Richard Littauer | Tzury Bar Yochay

Guest

Anna Berenberg
Google Cloud

Show Notes

Hello and welcome to Committing to Cloud Native Podcast! It’s the podcast by Reblaze where we talk about open source maintainers, contributors, sustainers, and their experiences in the Cloud Native space. Justin has some shirts and stickers that he wants to give away to fans of this show, so listen right now to find out how to get some! Today, we have a super exciting guest, Anna Berenberg, who is a Distinguished Engineer at Google. Anna goes in depth about her position at Google and what she does there. She tells us about the “Five nines” applications, how proxies are used on a day-to-day cloud basis at Google, and how security and reliability come together with what they’ve done at Google for Envoy. Also, Anna explains how listening and being attentive to people and customers who contribute to open source plays an important role. Download this episode now to learn so much more from Anna!

[00:02:15] Anna tells us what she does as a Distinguished Engineer, what her specialty is, and what Uber TL means.

[00:03:30] Richard reads some things from Anna’s Bio and she describes in depth what it all means.

[00:05:35] Justin asks if Google Search ever goes down because he’s never seen it down, and it seems very critical that it’s always up. Anna tells us all about the “Five nines" applications and she mentions a paper that was recently published called, “Deployment Archetypes for Cloud Applications.”

[00:09:35] Anna explains how she thinks about her role and what her goal is.

[00:11:50] Richard wonders how proxies are used on a day-to-day cloud basis at Google.

[00:13:41] Anna explains what proxyless means and how it works. She mentions a single general-purpose RPC infrastructure called Stubby that Google uses.

[00:19:20] Tzury asks Anna how we can make software more secure, especially critical pieces of actually handling infrastructure, and Anna tells us how security and reliability come together and what they’ve done at Google for Envoy.

[00:23:41] Tzury wonders what kind of things Anna is doing within Google or outside of Google, involving open source related products to make the entry point easier for newcomers and developers who come from different platforms and different technologies to adopt new technologies.

[00:26:43] Speaking of extensions and platforms, Justin asks Anna when Google is going to adopt Curiefense. ☺

[00:29:17] Richard asks Anna how Google incentivizes clients, communities, coders, developers, projects, and products like Curiefense to get involved in the planning stage, and what her teams are doing to make sure the needs of Curiefense and other projects, like the ones our listeners may have, are taken into account at a very high level.

[00:32:21] Find out where you can follow Anna online.

Quotes

[00:03:55] “So, we had our own proprietary load balancing proxies and control planes.”

[00:04:01] “And then a time came when we realized that for Google Cloud to achieve its principle, Google Cloud considered itself to be Open Cloud where we embrace open source technologies, and we basically essentially can think about cloud without borders.”

[00:04:33] “And at the same time, Lyft came out with this new, brand new proxy called Envoy Proxy, which had an amazing architecture.”

[00:05:42] “Well, this is actually another passion of mine is a design of Five nines applications.”

[00:11:04] “People should be thinking about policies and not the infrastructure that allows them to propagate and allows them to enforce.”

[00:12:13] “So think about it as like a gateway, the place where trust domains meet together.”

[00:14:05] “And what we developed actually, a service mesh before service meshes were cool.”

[00:15:09] “So when I looked at Cloud and how important gRPC is to cloud because it allows for much better productivity and velocity of cloud developers when using Protobufs and how well it fits with Kubernetes as modernization.”

[00:15:50] “We actually reuse the same APIs that we are using for Envoy.”

[00:16:46] “So, if you think about Gmail or something like this, some big application, or customers that big, that if you put a proxy in each hop, and you have hundreds of thousands of microservices, the daily job will be restarting proxies.”

[00:27:11] “An interesting conversation to have is how Envoy as a platform can allow co-existence of a functionality, in some cases could be competing, in some cases complementary.”

[00:30:00] “Well, we’re always listening. I think listening is an underappreciated activity and it’s important to listen and important to understand, not a single requirement but a collection of requirements.”

Links

Curiefense

Curiefense Twitter

Cloud Native Community Groups - Curiefense

community@curiefense.io

Reblaze

Justin Dorfman Twitter

jdorfman@curiefense.io

podcast@curiefense.io

Richard Littauer Twitter

Tzury Bar Yochay Twitter

Anna Berenberg Twitter

Anna Berenberg LinkedIn

“Deployment Archetypes for Cloud Applications” by Anna Berenberg, Brad Calder

gRPC

“gRPC Motivation and Design Principles” By Louis Ryan (Blog post)

Envoy

Credits


Transcript

Justin Dorfman 00:00

Hi! It's Justin, co-host of this podcast. We got some new shirts and stickers and we really want to give them away to fans of the show. All we ask is that you share one of your favorite episodes on Twitter and DM the link to @curiefense, and if you prefer email, drop a line to podcast@curiefense.io.

Thanks and I really hope you enjoy the show.

Anna Berenberg 00:25

It's actually a very interesting point about core versus extensions, which brings us back to the question of reliability. The reason that the core is kept very limited is to ensure stability, reliability and security of the core, and so let's say Google doesn't need all these extensions, then it can compile them out. It doesn't compile them in as core, and then they can guarantee quality.

Now everybody needs extensions [phonetic 00:56], right, because Envoy is not just a proxy, it's a platform.

Richard Littauer 01:02

Hello, and welcome to "Committing to Cloud Native," the podcast where we talk about the consequences of open source and cloud native technology. We have a very exciting guest today from Google. But before I get around to introducing her, I want to make sure that the listeners know about the panelists, because our voices are going to come up and it's nice to know who we are. So I'm Richard Littauer. I'm your host today, and with me is my co-panelist, Justin Dorfman. Justin, how are you?

Justin Dorfman 01:31

I'm great, Richard, how are you?

Richard Littauer 01:33

Always good. And my other co-panelist, Tzury, how are you doing?

Tzury Bar Yochay 01:38

I am great, Richard. How are you today?

Richard Littauer 01:41

Also, still good, hasn't changed in the last 10 seconds. Thanks for asking. All right. Getting to our guest today. Our guest is a Distinguished Engineer at Google, who, when I asked her to describe in more detail what that means, used a lot of words that I would like you all to hear. So I will ask her soon. Again, we have Anna Berenberg today, calling from San Francisco. Anna, how are you doing?

Anna Berenberg 02:04

I'm great. How are you all?

Richard Littauer 02:06

I think we're all still good. Okay. All right. So when I asked you, how else I might introduce you, you said a lot of things about load balancing and so on. What is it that you do as a Distinguished Engineer? What is your specialty?

Anna Berenberg 02:20

I am the Uber TL for load balancing products and technologies at Google. I work to make sure that load balancing works for both Google products as well as Google Cloud products, and I've been at Google for 15 years doing just that.

Richard Littauer 02:39

So I have a silly question. 15 years is awesome, by the way, wow. I'm unfamiliar with the term Uber TL, and that may just be me, but in case any of our listeners are too, can you describe what Uber TL means?

Anna Berenberg 02:49

Well, there are very useful people, engineers, who implement code, and then you have TLs, who actually guide them to build and design services and systems, and then when the scope becomes too big for one TL to cover, you have multiple TLs, and then we have Uber TLs, who work with the TLs. So it's like a hierarchical [phonetic 03:14] system, where Uber TLs now guide, design and architect a whole area and families of products.

Richard Littauer 03:24

Technical Lead, got it. Technically, what you've been doing at Google Cloud-- I mean, since 2015 you've had this job, is you drove the modernization of Google Cloud Application Networking by embracing emerging cloud native technologies, using Envoy proxy with Traffic Director, open service mesh, and a universal control plane that powers all Cloud application load balancing products. I just read that from your bio, which is why I was able to rattle it off so fast. Can you describe in more depth what that all meant?

Anna Berenberg 03:55

So we had our own proprietary load balancing proxies and control planes, and then a time came when we realized that for Google Cloud to achieve its principle, Google Cloud considered itself to be Open Cloud, where we embrace open source technologies and we basically, essentially, you can think about cloud without borders, because once you use open source, then everywhere the customer runs, they actually can become part of this open ecosystem of Google Cloud. And so with that, it became apparent that we need something different than just a Google proprietary product. And at the same time, Lyft came out with this brand new proxy called Envoy Proxy, which had an amazing architecture, and I totally fell in love with that, to be honest, because it was such a beautifully designed proxy. And with that, I made a proposal, and the proposal got approved, that we actually rebase and build all our new products, and rebase existing ones, on Envoy as a proxy. And it's been quite a wonderful experience and ride with that philosophy, because it actually allows us to embrace data planes that run pure Envoy somewhere else, whether it's on premises or in the cloud, and use our own control plane to manage them, as well as our managed LB products.

Justin Dorfman 05:27

So, ever since I've been using Google, it's like '98, it seems like it's never gone down. Is that true? Like, does Google Search ever go down? I've never seen it down. It seems like it's very critical that it's always up.

Anna Berenberg 05:42

Well, this is actually another passion of mine, the design of Five nines applications. And what does five nines mean? It's basically never down. It's applications that are basically always up, 24x7, and then how do you deal with that? What kind of architecture would you build? And with that, Google embraced what's called global applications, and that's why Search is never down: it could be down somewhere, but the traffic management and load balancing and routing, and everything, is built around places, zones, or regions, so basically anywhere the stack is potentially unhealthy, it works around it and serves from other healthy places. And the same thing inside of the application itself: the same technology is being used to make sure that you can always build around faulty hardware, faulty software. Everything can fail, and you still build an application to be able to survive it. Just out of interest, not for self-promotion: we recently published a paper titled "Deployment Archetypes for Cloud Applications" that talks specifically about all sorts of deployments, with global deployment as part of it.
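
Editor's note: a quick back-of-the-envelope check in Python of what "five nines" (and the seven nines in this episode's title) allow in downtime; the numbers follow directly from the availability percentages.

    # Downtime budget implied by an availability target, over one year.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for label, availability in [("three nines", 0.999),
                                ("five nines", 0.99999),
                                ("seven nines", 0.9999999)]:
        downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
        print(f"{label}: ~{downtime_minutes:.2f} minutes of downtime per year")

    # five nines allows roughly 5.3 minutes of downtime per year;
    # seven nines (the episode title) roughly 3 seconds per year.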

Justin Dorfman 07:05

Thank you, because that makes a lot of sense. And was your team responsible for building that out? Because it seems like you know a lot about that part of the search engine.

Anna Berenberg 07:17

No, not my team, but I build and work with teams that build infrastructure for the services. The services become our customers, and what we want to do is actually solve customer problems, and how would you solve customer problems unless you understand what problems they're facing? And so we get to listen to both the internal Google customers as well as external Google Cloud customers to understand: what are their requirements? How do they want to build these Five nines applications? What are the compliance requirements that also come with that? Yeah, so it's all about listening.

Tzury Bar Yochay 07:56

I would say, to me, Anna, you are the SDN [phonetic 08:00], and you are actually defining this software defined network, which people may not be aware of, because those tremendous trends and revolutions take place 99% on the backend side of the story, at the backbone of the infrastructure of the cloud. But cloud ability, agility, scalability, and all the superlatives we can attach to the cloud, are mainly available due to the fact that data centers, which used to be based on appliances and hardware, became what we call cloud, which is mainly based on a software stack performing, doing all the routing and all the great work of networking and switching and whatnot, firewalling, etc., etc., etc.

The world is stepping towards this Software Defined Network, I believe. I don't know numbers or predictions, but they talk about ISPs one day also transforming to that type of infrastructure, 5G and so on and so forth. And here we are talking to you, while you're actually leading teams and doing design and defining, eventually, the software defined network. So I wonder, do you sometimes get to think of your part in this evolution, of the significant role you've been playing, in such a humble way, all those years?

Anna Berenberg 09:38

Am I thinking about my role in it that way? I think we live in a very interesting time period, and like you said, we are at the cusp [phonetic 09:50] of SDN becoming a requirement everywhere, because all the customers and all the users want to have policy based workflows, I would say.

So, everything has to become policy based. How do you propagate policy? How do you get this policy enforced? How do you do it in uniform ways, so that the consumers don't have to think, 'Oh, I'm going to enforce one policy on one hop and another policy on another hop, because it comes from different providers or different functionality'? How do we bring networking together under one umbrella in a way that a customer can define a single policy, let's say an access policy: who can touch this bucket? Or who can talk to this service? Or who can actually go to the internet? And how can all these policies be defined in a very simple way and enforced on all network paths, no matter how many network paths there are, and what equipment or what products they use?

And again, I think this will change how people think about networking. In fact, they'll probably stop thinking about networking and start thinking about security, and that's the goal of it. People should be thinking about policies, and not the infrastructure that allows them to propagate and allows them to enforce them. So yes, my goal is for people to stop thinking about the network.
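
Editor's note: a toy illustration in Python of the "policies, not infrastructure" idea described above: one declarative access rule, evaluated the same way at every enforcement point. The service names, rules, and enforcement points here are hypothetical, not Google's actual policy model.

    # One access policy, defined once ...
    ACCESS_POLICY = {
        ("payments-svc", "billing-db"): "allow",
        ("frontend", "internet"): "deny",
    }

    def is_allowed(source: str, destination: str) -> bool:
        """Uniform policy check, regardless of which network path enforces it."""
        return ACCESS_POLICY.get((source, destination)) == "allow"

    # ... and enforced identically at every hop, whatever the equipment is.
    for enforcement_point in ["edge-gateway", "sidecar-proxy", "backend-firewall"]:
        print(enforcement_point, is_allowed("payments-svc", "billing-db"))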

Richard Littauer 11:21

So I have a very silly question, because I'm just not the expert, and I know that Justin and Tzury work on this stuff all the time with Curiefense. I'm largely here on this podcast helping to guide guests along, but I do need to ask, and you seem to be the expert. So I wanted to ask you: when I think of a proxy, I think of what I use to stop people knowing that I'm using the Pirate Bay to download Star Trek episodes. That's obviously not the only use for proxies. How are proxies used on a day to day cloud basis at Google?

Anna Berenberg 11:54

There are multiple reasons for using a proxy. In some sense, you can think of it as a choke point. So anywhere you want to choke access to something, enforce policy, and make sure that there is control over traffic, you put the proxy in there. That's one way. So think about it like a gateway, the place where trust domains meet together.

Let's say you have one team of people who trust each other within a trust domain, and you have another team of people who trust each other, and then you want to connect these two together. To connect these two together, you need a thing in between that would basically allow somebody to authoritatively say: OK, this traffic can come in, this traffic cannot come in, this traffic should be augmented, this traffic should be thrown away, whatever.

The other part of the proxy, especially for reverse proxies, is load balancing, and so the proxy becomes a load balancer. So you are able to hide the actual workloads, the size of workloads, and the deployments of workloads from the people who consume the service, and then you can scale up and scale down, and basically all of that happens on the backend intelligently. And no matter how much traffic is incoming, the proxy can distribute it in an optimal way to actually serve the consumer, as well as not overload the workload behind it.
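
Editor's note: a minimal Python sketch of the two roles described above, a policy check at the choke point plus spreading traffic across healthy backends. The backend addresses, caller names, and allow-list are hypothetical; a real gateway such as Envoy does this with active health checking and richer balancing policies.

    import itertools

    # Hypothetical backend pool hidden behind the proxy.
    BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
    HEALTHY = set(BACKENDS)                  # kept up to date by health checks
    ALLOWED_CALLERS = {"team-a", "team-b"}   # the trust-domain policy

    _round_robin = itertools.cycle(BACKENDS)

    def route(caller: str) -> str:
        """Enforce policy at the choke point, then load-balance the request."""
        if caller not in ALLOWED_CALLERS:    # traffic that cannot come in
            raise PermissionError(f"{caller} is not allowed through this gateway")
        for _ in range(len(BACKENDS)):       # skip backends marked unhealthy
            backend = next(_round_robin)
            if backend in HEALTHY:
                return backend               # traffic that can come in
        raise RuntimeError("no healthy backends available")

    print(route("team-a"))   # e.g. 10.0.0.1:8080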

Richard Littauer 13:26

Awesome. That's actually really helpful for me, because I was confused about how proxies interface with load balancers. Now I see that they're basically the same thing, it's just another term. That's really great. I know that you defined one of the first gRPC proxyless meshes, if not the first. Proxyless seems to-- If proxies are so useful, what does proxyless mean, and how does that work?

Anna Berenberg 13:46

Well, actually, me defining it is probably taking too much credit for what I've done. I looked at how Google internally does service to service communication, and we have a proprietary protocol, Stubby, that has historically been used at Google, and what we developed was actually a service mesh, before service meshes were cool. And that service mesh, what did it do? It had a control plane, and it has a data plane, and the control plane actually manages the data plane, which is part of the Stubby transport and framework that is linked into the client code and the server code. So it's a little different from a regular service mesh.

And the reason is that regular service meshes as we know them today need two proxies, one on each side, and the protocol is mostly HTTP. So you cannot change the protocol in a way that fits the traffic management into the application itself.

So what happened here, in our case: we have this service to service communication across all the microservices and services within Google, and you can imagine there are hundreds of thousands of microservices and services, and it has proven itself very successful. So I looked at Cloud and how important gRPC is to cloud, because it allows for much better productivity and velocity of cloud developers when using protobufs, and how well it fits with Kubernetes as modernization: while Kubernetes is the modernization of compute and the control plane, gRPC is the modernization of application networking.

So looking at this together, it was pretty easy to make a next step and say, OK, we are going to take a service mesh that we developed inside of Google, which is Proxima Service Mesh, and we're going to put it on gRPC.

How do we do that?

We actually reuse the same APIs that we are using for Envoy, and so you have a single control plane, and you have a data plane that is compliant with this API, similarly to the Envoy proxy, and now you can mix and match: you can have proxied service and client communication, where all the traffic management, security, etc., is on that side.

So the developer doesn't have to worry about them, it's done for them, while it also doesn't need a proxy. And what it helps with, by and large, it helps with two points:

  1. It improves latency, because going through the proxy adds latency. It's not that important for a regular application, but it is super important for latency-critical applications that measure every millisecond and sub-millisecond. As well, it's important for very large deployments.

So if you think about Gmail, or something like that, some big application or customers that big: if you put a proxy in each hop and you have these hundreds of thousands of microservices, your daily job will be restarting proxies.

So proxyless also solves the problem of lifecycle management of proxies. And the way we've done it, we actually allow them to coexist. So in a single service mesh, you can have both proxyless [phonetic 17:19] and proxy.
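
Editor's note: a rough Python sketch of the proxyless pattern described above, where the gRPC client itself resolves endpoints and load-balances via the xDS APIs instead of sending traffic through a sidecar proxy. It assumes grpcio built with xDS support and a bootstrap file pointing at an xDS control plane (for example Traffic Director); the file path and service name are hypothetical, and this is an illustration rather than Google's internal code.

    import os
    import grpc

    # The bootstrap file tells the gRPC client where the xDS control plane lives.
    # Path and service name below are made up for this sketch.
    os.environ.setdefault("GRPC_XDS_BOOTSTRAP", "/etc/xds/bootstrap.json")

    # The xds: target scheme makes the client do resolution and load balancing
    # itself, driven by the control plane, with no Envoy sidecar on the path.
    channel = grpc.insecure_channel("xds:///example-service")

    # From here a normal generated stub is used exactly as with any channel:
    #   stub = example_pb2_grpc.ExampleStub(channel)
    #   stub.Ping(example_pb2.PingRequest(message="hello"))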

Justin Dorfman 17:21

So it's not just like some big cron job. It's actually, like, really optimized.

Tzury Bar Yochay 17:26

This is mind blowing, JD, this is mind blowing, I'm telling you. What's going on right now.

Anna Berenberg 17:31

So, because the APIs are the same, you're going to get feature parity; not full feature parity, because obviously a proxy, by definition, can have more functionality, because it's out of the process: it can restart independently, and if it crashes, it doesn't crash the process itself, etc.

Tzury Bar Yochay 17:54

You know, one of the things that actually scares me, sometimes literally keeps me up at night, is that we live in a world where, well, we know that hardware has fewer bugs than software. That's a fact, and there are reasons for it, logic gates, etc.

Anna Berenberg 17:54

So we have to be a lot more careful there [unintelligible 17:56], but it indeed allows for coexistence of applications that have a super high latency requirement, a super low latency requirement, as well as ones that cannot afford to have so many proxies in the deployment.

Tzury Bar Yochay 18:34

Software, as Andreessen Horowitz [phonetic 18:37] put it at the time, software is eating the world. Now software is everywhere. Security is defined by software. Networking is defined by software; even proxy and proxyless are software defined, transportation and routing and so on, and then there is this amazing tech stack you just described. How can we sleep at night, knowing that software... You know that every piece of software has bugs, and those bugs can easily be exploited, taken advantage of by malicious entities to do all those attacks and breaches and so on. How do we make software more secure, especially critical pieces that are actually handling infrastructure? What are the guarantees, the gatekeepers, you put in place in your work and your team's work, day to day, to ensure robustness and safety and security?

Anna Berenberg 19:38

Yes, this is an excellent question, and that's where security and reliability come together. One of the things that we've done, let's say at Google for Envoy: we founded a whole team that is responsible for the security of Envoy, and the focus on reliability for us is very important, that Envoy doesn't have exploits, that it will never crash, because we're using it as the foundation for our products. That's one thing.

Another thing is the general culture of the teams, and I think Google has a culture very much tilted towards reliability. People think about reliability when they design systems, when they roll them out. There are a lot of tools to make sure that testing is properly done. There are fuzzing tools; fuzzing is required for interfaces. There is all sorts of paranoia, healthy paranoia, that is deployed and required when you build infrastructure, and I would say that reliability is more important than any functionality we can add.

In some cases, we can decide not to integrate something just because we want to make it more reliable. So the investment doesn't necessarily come out as features that customers see; it is how reliable we make the systems for the customers.
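
Editor's note: a toy fuzz harness in Python to illustrate the interface fuzzing Anna mentions above. The parse_header function is hypothetical, and real Envoy fuzzing runs under libFuzzer/OSS-Fuzz with coverage guidance rather than a plain random loop like this one.

    import random

    def parse_header(raw: bytes) -> dict:
        """Hypothetical parser under test: 'name: value' pairs, one per line."""
        out = {}
        for line in raw.split(b"\n"):
            if b":" in line:
                name, value = line.split(b":", 1)
                out[name.strip().lower()] = value.strip()
        return out

    # Throw random bytes at the interface; the invariant is simply "no crash".
    for _ in range(10_000):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(64)))
        try:
            parse_header(blob)
        except Exception as exc:   # any unexpected exception is a finding
            print(f"input {blob!r} triggered {exc!r}")
            raise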

Tzury Bar Yochay 21:24

Can we talk a little bit more about Envoy? I'm not sure how far you can go in terms of things which are proprietary within Google, because it's not a secret that Google has its own flavors: Floros [phonetic 21:38] used to be a flavor, and now it's probably, I'm assuming, flavors of Envoy. When we started Curiefense, we looked at: OK, what platform do we want to hook onto as a first iteration of the product, of this solution? And we picked Envoy, not being aware of the very fact that Google Cloud products, and apparently AWS and Azure, are all following Google as usual and rewriting their own cloud stack on top of Envoy.

So Envoy, to some extent, if not yet, will soon become the operating system, sort of the operating platform, of cloud networking, application networking. Now, there is a barrier, though. Envoy, on the one hand, was built with an architecture that, as you mentioned, provides APIs and extensibility in the first place.

So Envoy's own features, as we heard from Snow, one of the maintainers, who explained it to us: when they discuss a feature for Envoy, they first look at whether they can implement the feature as an extension, using the Envoy core APIs [phonetic 22:54], and very rarely will they change the core. In most cases, Envoy development is done as Envoy extensions; this is how they build, extend, and expand.

So this very extensibility, the multiple APIs and extensions you need to take into account in order to get started, provides some sort of barrier to beginners to begin with. But on the other hand, it's like using Vim as an editor: if you get through the first day or two of the hassle and you don't give up, then you find yourself in heaven, in a peaceful mind, you know, you get to the right place.

So, what are the things that you are doing, I would say, within Google, and probably also outside of Google, involving open source related products, to make the entry point easier for newcomers, for developers who come from different platforms, different technologies, to adopt new technologies?

Anna Berenberg 24:04

It's actually a very interesting point about core versus extensions, which brings us back to the question of reliability. The reason that the core is kept very limited is to ensure stability, reliability, and security of the core, and so let's say Google doesn't need all these extensions, then it can compile them out; it doesn't compile them into the core, and then they can guarantee quality. Now, everybody needs extensions, because Envoy is not just a proxy, it's a platform; as you said, all of the functionality is built as an extension.

As for what I think is going to simplify onboarding and extensibility in the future, there are two things:

  1. One is standardized interfaces for remote filters. There is one type of authorization remote filter, like ext_authz, that has a predefined API. So for that, you aren't going to need to touch the Envoy proxy at all; you can run your filter collocated, if it's an authorization filter, so you can build services based on that.
  2. And the other one, which is now in development from Google developers actually, is called External Processing.

So that one also is a gRPC callout from the proxy; it does a callout to a complicated service, and this service can actually do a modification of the request, if needed. So that would allow onboarding of people without ever touching the proxy itself; the only thing that needs to change is the configuration.

So the configuration is given by the control plane to the proxy, and then the proxy will call out to remote services. This is, like, totally mind boggling.

And the second option is Wasm in the proxy, which will allow developers to actually compile their code altogether independently; the ABIs are going to be standardized also. So it's going to be a standard way of getting the data in and getting the data out, and that's it. So I think you rightfully saw yourself that usability should be a concern, because more and more people are using Envoy to implement, again, SDN, right? It's all about value added services on top of this platform.
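
Editor's note: a toy Python sketch of the remote-callout pattern behind filters like ext_authz and External Processing: the proxy forwards request metadata to an out-of-process service, which returns an allow/deny decision and optional mutations, so extension authors never touch the proxy binary, only its configuration. The function and header names are hypothetical; a real deployment implements Envoy's gRPC CheckRequest/CheckResponse (or processing) contract instead of plain dicts.

    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        allowed: bool
        added_headers: dict = field(default_factory=dict)   # optional mutation

    def check(headers: dict) -> Decision:
        """Out-of-process authorization callout the proxy is configured to call."""
        token = headers.get("authorization", "")
        if token.startswith("Bearer "):
            return Decision(True, {"x-validated-by": "authz-callout"})
        return Decision(False)

    # What the proxy-side filter effectively does with the callout's answer.
    decision = check({"authorization": "Bearer abc123", ":path": "/api"})
    print("forward upstream" if decision.allowed else "respond 403")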

Justin Dorfman 26:40

Speaking of extensions and platforms, when is Google going to adopt Curiefense? I mean, we have a thriving community and, you know, we have room for one more organization to start using Curiefense. [crosstalk 26:55] Oh, come on.

Richard Littauer 27:00

You're going to scare away all our guests.

Tzury Bar Yochay 27:05

This is my KPI. This is [crosstalk 27:11]

Anna Berenberg 27:11

An interesting conversation to have is how Envoy as a platform can allow coexistence of functionality that in some cases could be competing, in some cases complementary: how can they build the offering where the consumer can decide, 'I want some functionality from this product, from the cloud provider's native offering, versus I want an add-on'? Like, in this case, Google Cloud has a product which is in the same space as Curiefense, which is called Cloud Armor. So it's an interesting proposition to develop an ecosystem like that.

Tzury Bar Yochay 27:53

Justin, you were not aware of this, but you should be aware at this point that Anna was a great inspiration to us in the very early days of Curiefense. I had a call with Anna, which probably we will follow up on in this podcast, and Matt Klein, where we actually discussed Curiefense, its architecture around Envoy, and so on and so forth. So the contribution from Google is given already, JD, on a day-to-day basis.

Justin Dorfman 28:22

[crosstalk 28:23] You know what, I didn't know that, and it should probably be in our About Us, because that's all I know; I joined two months after the release. And I was half joking. I mean, obviously, we would love it. But no, I wasn't trying to put you on the spot or anything.

Richard Littauer 28:44

Justin, you're ridiculous. Alright, here is a follow-up. So you do amazing work at Google, you lead many technical leads, and you work really hard to make sure that this work is happening, and a lot of what you're doing has been groundbreaking, which is awesome. A lot of the stuff is open source; gRPC is open source, which is really great. Anyone can get involved, and when you look at the code, I mean, the standards and the principles for gRPC are available at gRPC.io, free and open, allowing anyone to look at what's there.

My question is, how do you incentivize clients, communities, coders, developers, projects, products like Curiefense to get involved in the planning stage, in the stage where you're working? Because Google is doing all this work at the top, strategizing, and it's easy to say it's open source at the bottom. It's easy to say, 'Well, we've made it open source. Go ahead and use it if you want.' But it's much harder to integrate feedback from the community at the stage where you're deciding how to build all of this stuff and strategizing your knowledge base. So what are you doing, what are your teams doing, to make sure that the needs of Curiefense and other projects, like ones our listeners might have, are taken into account at a very high level?

Anna Berenberg 29:38

Well, we are always listening. I think listening is an underappreciated activity, and it's important to listen and important to understand not a single requirement but a collection of requirements, because when you have requirements from multiple people, then you're not looking at solving a point problem, you are looking at solving an actually generic problem.

So we are always listening and being attentive both to people who contribute to open source as well as to our customers, and interestingly enough, our customers are also contributors to open source.

So this creates an ecosystem in which we provide value added services on top of open source, while our customers take advantage of our value added proposition. They actually improve the open source offerings as well, and they make them more suitable for themselves, especially on the consumption side: they are going to produce Kubernetes operators that configure whatever they need, while we are providing GCP APIs.

There is a lot of this collaboration, natural, organic collaboration, happening between the open source community, where, let's say, a lot of our customers are in the community and the community are our customers, as well as customers who actually are not very interested in open source but who will use Cloud Native. So you have a spectrum of customers, and all you need is to listen, to understand what they're missing.

Richard Littauer 31:46

I love that answer. Thank you. We are coming up on time, so one of the questions I have... oh, can't we keep going? I want to keep going, if she's fine with it. Okay, well, then go ahead and ask your question.

You know, Paul, keep that in, I want it to be real and show that we want to keep going with Anna. [crosstalk 32:05] Something, Anna, I don't want you to, like... well, I don't want to abuse this privilege.

Anna Berenberg 32:14

You all funny.

Richard Littauer 32:17

See, Mom, I'm funny.

Justin Dorfman 32:19

So my final question was, where can people listen to you? And do you have any final thoughts, and if not, where can people find your final thoughts elsewhere, your other thoughts on this sort of stuff? Do you have a Twitter account, a blog?

Anna Berenberg 32:31

I don't want to think of myself as a person anybody has to listen to.

Justin Dorfman 32:38

In fact, 'listening is an underappreciated activity', that is from you.

Richard Littauer 32:43

Also, I think this was really interesting to listen to. You have such a depth of knowledge, and your ease of explanation really shows that you're able to take this really complex stuff and just say, 'Yes, here's how we do it.' It's pretty cool and, like, really great. So you're definitely someone worth listening to.

Anna Berenberg 33:00

Yes, so I have Twitter. I'm posting some on it. I have a LinkedIn account, but no, I don't have a blog.

Richard Littauer 33:07

That's okay. What's your Twitter account?

Anna Berenberg 33:09

It's [unintelligible 33:10], it's K-N-I-G-A, and it's from Russian. It means 'book' in Russian.

Richard Littauer 33:19

I love that. Yeah. Well, thank you so much for coming on this podcast. It was really great, and I really appreciate you sharing your knowledge. I'm thinking about the etymology of Envoy, because Envoy means messenger, and as we all know, the Roman god of messengers was also the Roman god of flowers. So since we talk of flavors, we could also think of a bouquet of different Envoy things, and so I was trying to tie something around this; it was just really great, and now I feel like I've walked through a garden of beautiful flowers. So thank you so much.

Justin Dorfman 33:52

I love how, Richard, you go, 'Everyone knows.' I don't know! And now [laughter]... I always learn something, like, either from the linguist or a couple of new words that you define. It's very interesting: not only do I learn about committing to Cloud Native, I also get to learn about words, big words, with Richard.

Richard Littauer 34:11

Thank you so much.

Anna Berenberg 34:13

Thank you.

Richard Littauer 34:14

Thank you.

Special Guest: Anna Berenberg.


Arşivlenmiş dizi ("Etkin olmayan yayın" status)

When? This feed was archived on July 01, 2022 02:28 (2y ago). Last successful fetch was on October 25, 2021 23:04 (2+ y ago)

Why? Etkin olmayan yayın status. Sunucularımız bir süredir geçerli bir podcast beslemesi alamadı

What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check if the publisher's feed link below is valid and contact support to request the feed be restored or if you have any other concerns about this.

Manage episode 299519428 series 2968145
İçerik Reblaze Technologies Ltd. tarafından sağlanmıştır. Bölümler, grafikler ve podcast açıklamaları dahil tüm podcast içeriği doğrudan Reblaze Technologies Ltd. veya podcast platform ortağı tarafından yüklenir ve sağlanır. Birinin telif hakkıyla korunan çalışmanızı izniniz olmadan kullandığını düşünüyorsanız burada https://tr.player.fm/legal özetlenen süreci takip edebilirsiniz.

Sponsored by Reblaze, creators of Curiefense

Panelists

Justin Dorfman | Richard Littauer | Tzury Bar Yochay

Guest

Anna Berenberg
Google Cloud

Show Notes

Hello and welcome to Committing to Cloud Native Podcast! It’s the podcast by Reblaze where we talk about open source maintainers, contributors, sustainers, and their experiences in the Cloud Native space. Justin has some shirts and stickers that he wants to give away to fans of this show, so listen right now to find out how to get some! Today, we have a super exciting guest, Anna Berenberg, who is a Distinguished Engineer at Google. Anna goes in depth about her position at Google and what she does there. She tells us about the “Five nines” applications, how proxies are used on a day-to-day cloud basis at Google, and how security and reliability come together with what they’ve done at Google for Envoy. Also, Anna explains how listening and being attentive to people and customers who contribute to open source plays an important role. Download this episode now to learn so much more from Anna!

[00:02:15] Anna tells us what she does as a Distinguished Engineer, what her specialty is, and what Uber TL means.

[00:03:30] Richard reads some things from Anna’s Bio and she describes in depth what it all means.

[00:05:35] Justin asks if Google Search ever goes down because he’s never seen it down, and it seems very critical that it’s always up. Anna tells us all about the “Five nines" applications and she mentions a paper that was recently published called, “Deployment Archetypes for Cloud Applications.”

[00:09:35] Anna explains how she thinks about her role and what her goal is.

[00:11:50] Richard wonders how proxies are used on a day-to-day cloud basis at Google.

[00:13:41] Anna explains what proxyless means and how it works. She mentions a single general-purpose RPC infrastructure called Stubby that Google uses.

[00:19:20] Tzury asks Anna how we can make software more secure, especially critical pieces of actually handling infrastructure, and Anna tells us how security and reliability come together and what they’ve done at Google for Envoy.

[00:23:41] Tzury wonders what kind of things Anna is doing within Google or outside of Google, involving open source related products to make the entry point easier for newcomers and developers who come from different platforms and different technologies to adopt new technologies.

[00:26:43] Speaking of extensions and platforms, Justin asks Anna when Google is going to adopt Curiefense. ☺

[00:29:17] Richard asks Anna how do they incentivize clients, communities, coders, developers, projects, and products like Curiefense to get involved in the planning stage, and what are her teams doing to make sure the needs of Curiefense and other projects, like our listeners may have, are taken into account at a very high level.

[00:32:21] Find out where you can follow Anna online.

Quotes

[00:03:55] “So, we had our own proprietary load balancing proxies and control planes.”

[00:04:01] “And then a time came when we realized that for Google Cloud to achieve its principle, Google Cloud considered itself to be Open Cloud where we embrace open source technologies, and we basically essentially can think about cloud without borders.”

[00:04:33] “And at the same time, Lyft came out with this new, brand new proxy called Envoy Proxy, which had an amazing architecture.”

[00:05:42] “Well, this is actually another passion of mine is a design of Five nines applications.”

[00:11:04] “People should be thinking about policies and not the infrastructure that allows them to propagate and allows them to enforce.”

[00:12:13] “So think about it as like a gateway, the place where I trust the mains meet together.”

[00:14:05] “And what we developed actually, a service mesh before service meshes were cool.”

[00:15:09] “So when I looked at Cloud and how important gRPC is to cloud because it allows for much better productivity and velocity of cloud developers when using Protobuf’s and how well it feeds with the Kubernetes as modernization.”

[00:15:50] “We actually reuse the same API’s that we are using for Envoy.”

[00:16:46] “So, if you think about Gmail or something like this, some big application, or customers that big, that if you put proxy in each home, and you have hundreds of thousands of microservices, the daily job will be restarting proxies.”

[00:27:11] “An interesting conversation to have is how Envoy as a platform can allow co-existence of a functionality, in some cases could be competing, in some cases complimentary.”

[00:30:00] “Well, we’re always listening. I think listening is under appreciated activity and it’s important to listen and important this time, not a single requirement but a collection of requirements.”

Links

Curiefense

Curiefense Twitter

Cloud Native Community Groups-Curifense

community@curiefense.io

Reblaze

Justin Dorfman Twitter

jdorfman@curiefense.io

podcast@curiefense.io

Richard Littauer Twitter

Tzury Bar Yochay Twitter

Anna Berenberg Twitter

Anna Berenberg Linkedin

“Deployment Archetypes for Cloud Applications” by Anna Berenberg, Brad Calder

gRPC

“gRPC Motivation and Design Principles” By Louis Ryan (Blog post)

Envoy

Credits


Transcript

Justin Dorfman 00:00

Hi! it's Justin cohosts of this podcast, we got some new shirts and stickers and we really want to give them away to fans of the show. All we ask is you share one of your favorite episodes on Twitter and DM the link to at Cariefence and if you prefer email drop a line to podcast @curiefence.io.

Thanks and I really hope you enjoy the show.

Anna Berenberg 00:25

It's actually very interesting point about core versus extensions. Which brings us back to the question of reliability. The reason that the core is not kept figure limited is to ensure stability, reliability and security of the core and so let's say Google doesn't need all this extensions, then it can compile them out. It doesn't compile them in as core, and then they can guarantee quality.

Now everybody needs extension [phonetic 00:56] right because Envoy is not just a proxy to platform.

Richard Littauer 01:02

Hello, and welcome to "Committing to Cloud Native, the podcast where we talk about the consequences of open source and cloud native technology. We have a very exciting guest today from Google. But before I get around to introducing her, I want to make sure that the listeners know about the panelists, because our voices are going to come up and it's nice to know who we are. So I'm Richard Littower [phonetic 01:25]. I'm your host today with me is my co-panelists, Justin Dorfman. Justin, how are you?

Justin Dorfman 01:31

I'm great, Richard, how are you?

Richard Littauer 01:33

Always good and my other co-panelists, Tzury, how are you doing?

Tzury Bar Yochay 01:38

I am great, Richard. How are you today?

Richard Littauer 01:41

Also, still good, hasn't changed in the last 10 seconds. Thanks for asking. All right. Getting to our guest today. Our guest is a Distinguished Engineer at Google, who when I asked her to describe in more detail what that means use a lot of words that I would like you all to hear. So I will ask her soon. Again, we have Anna Baron Berg today calling from San Francisco. Anna, how are you doing?

Anna Berenberg 02:04

I'm great. How are you all?

Richard Littauer 02:06

I think we're all still good. Okay. All right. So when I asked you, how else I might introduce you, you said a lot of things about load balancing and so on. What is it that you do as a Distinguished Engineer? What is your specialty?

Anna Berenberg 02:20

I am Uber TL for load balancing products and technologies at Google, I work to make sure that load balancing works for both Google products as well as Google Cloud product and I've been at Google for 15 years doing just that.

Richard Littauer 02:39

So I have a silly question. 15 years is awesome, by the way, wow, I'm unfamiliar with the term Uber TL and that may just be me. But in cases, any of our listeners, can you describe what Uber TL means?

Anna Berenberg 02:49

Well, there are very useful people as engineers who implement code, and then you have TLs, who actually guide them to build and design services and systems and then when the scope becomes too big for one TLs to cover, then you have multiple TLs and then we have Uber TLs, for work with the TLs. So like your article [phonetic 03:14] system, where Uber TLs now guide, and design and architect a whole area, and the families of products.

Richard Littauer 03:24

Technical Lead, got it. Technically, what you've been doing at Google Cloud-- I mean, since 2015, you've had this job is you drove the modernization of Google Cloud-- Application Networking by embracing Immersion Cloud Native Technologies, using onboard proxy with traffic directors, Open Service Mesh and Universal Control Plane that powers all cloud applications, load balancing products, I just read that from your bio, which is why I was able to rattle it off so fast, can you describe in more depth what that all meant?

Anna Berenberg 03:55

So we had our own proprietary load balancing proxies and control planes and then a time came when we realized that for Google Cloud to achieve its principle, Google Cloud considered itself to be open cloud where we embrace Open Source Technologies and we basically, essentially, you can think about cloud without borders because once you use Open Source, then everywhere where the customer runs, they actually can become part of this open ecosystem of Google Cloud and so with that, it became apparent that we need something different than just google proprietary product and at the same time, Lyft came out with this new brand new proxy called Envoy Proxy, which had an amazing architecture and I totally fell in love with that, to be honest, because it was such a beautifully design proxy and with that, I made a proposal and the proposal got approved that we actually rebate and build all our new products and rebates, existing ones on our wall as a proxy and it's been quite wonderful experience and ride that philosophy because it allows actually to embrace bigger planes that run pure Envoy on somewhere else, whether it's on premises on the cloud, and use our own control plane to manage them, as well as our managed LB products.

Justin Dorfman 05:27

So, ever since I've been using Google's it's like 98, it's seems like it's never gone down and is that true? Like, does Google Search ever go down? I've never seen it down. It seems like it's very critical that it's always up.

Anna Berenberg 05:42

Well, this is actually another passion of mine is a design of Five nines applications and what five nines means? It's basically never done. It's the applications that basically always up 24x7, and then how do you deal with that? What kind of architecture would you build and with that, Google embraced what's called Mobile Applications and that's why searches never down because it could be down somewhere? But the traffic management and load balancing and routing, and everything is bound around places, zones, or regions are basically everywhere, where they stack potential maybe unhealthy, then it works around it and serves it from other Healthy Places and the same thing inside of the application itself, the same technology is being used to make sure that you can always build around faulty hardware faulty software, everything can fail and then you still build an application to be able to survive it. Just out of interest, not for self-promotion. We published recently, a paper titled deployment archetypes for cloud application that talks specifically about all sorts of deployment and global deployment as part of it.

Justin Dorfman 07:05

Thank you, because that makes a lot of sense and was your team responsible for building that out? Because it seems like you know a lot about that type of part of the search engine?

Anna Berenberg 07:17

No, not my team but I build and work with teams then build infrastructure for the services, but the services become our customers and what we want to do is actually solve customer problems, and how would you solve customer problems unless you understand what problems they're facing and so we get to listen both the internal Google customers as well as external Google Cloud customers to understand what are their requirements? How do they want to build this Five nines applications? What is the compliance requirements that also comes with that? Yeah, so it's all about listening.

Tzury Bar Yochay 07:56

I would say to me, Anna, your ESDN [phonetic 08:00], and you actually defining this software defined network, which people may not be aware of, because those tremendous trends and revolutionary takes 99% in the backend side of the story, at backbone of the infrastructure of the cloud, but Cloud ability, agility, scalability, and all superlative and we can attach the cloud is mainly available for the fact that data center, were used to be based on appliances and hardware are became what we call cloud, which is mainly based on software stock, performing, doing all the routing and all the great work of networking and switching and whatnot, firewalling, etc., etc., etc.

The world is stepping towards these Software Defined Network, I believe, I don't know numbers or predictions, but they talk about ISPs, one day also transforming to that type of infrastructure, 5g and so on, and so forth and then we are talking to you while you're actually leading teams and doing design and defining, eventually the software defined network. So I wonder sometimes you get to think of your part in the evolution of the significant role you've been playing. It's such a humble way, all those years.

Anna Berenberg 09:38

I'm thinking about my role in it that way. I think we live in a very interesting time period And like you said, we add the causal [phonetic 09:50] of SDN becoming requirement everywhere, because all the customers and all the users want to have policy based workflows, I would say.

So, everything has to become policy base. How do you propagate policy? How do you get this policy enforced? How do you do it in uniform ways, so that the consumers don't have to think, 'Oh, I'm going to enforce one policy on a hope and another policy and another hope because it comes from different providers or different functionality? How do we bring networking together under one umbrella in a way that customer can define a single policy, let's say, Access Policy...? Who can patch this bucket? Or who can talk to the service? Or who can actually go to internet and how all these policies can be defined in a very simple way and enforced in all network paths, no matter how many network paths are there? And what equipment or what products do they use?

If again, I think this will change how people think about networking. In fact, they probably stop thinking about networking, they start thinking about security and that's the goal of it. People should be thinking about policies, and not the infrastructure that allows them to propagate and allows them to enforce. So yes, my goal is for people to stop thinking about network.

Richard Littauer 11:21

So I have a very silly question, because I'm just not the expert and I know that Justin and Tzury work on this stuff all the time with Curiefence. I don't... I'm largely here on this podcast helping to guide guests along but I do need to ask me, and you seem to be the expert. So I wanted to ask you, when I think of a proxy, I think of what I use to stop people knowing that I'm using the Pirate Bay to download Star Trek episodes. That's obviously not the only use for proxies. How our proxies use on a day to day cloud basis at Google.

Anna Berenberg 11:54

There are multiple reasons for using proxy. At some sense, you can think of it as a choke point. So anywhere where you want to choke, access to something and enforce policy and make sure that there is a control over traffic you put the proxy in there. That's one way. So think about this, like a gateway, the place where trust domains meet together.

Let's say you have one team of people who trust. Within the trust domain, you have another team of people who trust and then you want to connect this two together. To connect this two together, you need a thing in between that would allow basically somebody authoritatively say, OK, this traffic can come in, this traffic cannot come in, this traffic should be augmented, this traffic should be thrown away, whatever.

The other part of the proxy is, especially of reverse proxies are more balanced and so behind proxy becomes a load balancer. So you are able to hide actual workloads and size of workloads and deployments or workloads from the people who consume the service and then you can scale up and scale down and basically all of that happens on the back end intelligently and no matter how much traffic is, uncommon proxy can distribute it in an optimal way to actually serve the consumer, as well as not overload a workload behind.

Richard Littauer 13:26

Awesome. That's actually really helpful for me, because I was confused how proxies interface with load balancers. Now I see that they're basically the same thing. It's just another term. That's really great. I know that you define one of the first gRPC proxy lists mesh, if not the first. Proxy less seems to-- If proxies are so useful, what is proxy less mean, and how does that work?

Anna Berenberg 13:46

Well, actually, me defining it is probably too much of taking a credit what I've done, I looked at how Google internally used service to service communication, and we have a proprietary protocols study that has historically been used at Google and what we developed actually a service mesh, before service meshes were cool and what that service mesh, what it did? It had a control plane, and it has a data plane and then control plane actually manages data plane, which is part of the study transport and framework that is linked into the client code and the server code. So it's a little different from regular service mesh.

And the reason regular service meshes as we know it day needs two proxies on both sides, the protocol is mostly HTTP. So you cannot change the protocol in a way to fit into the traffic management as part of the application.

So what happened here in our case, we have that is service to service communication that has been all micro services and services within Google and you can imagine, there are 100s of 1000s of micro services and services and it's proven itself very successful. So when I looked at Cloud and how important gRPCs to cloud because it allows for much better productivity and velocity of cloud developers when using protobufs, and how well it feels with a Kubernetes as modernization, while Kubernetes, modernization of compute and control plane, the gRPCs is a modernization of application network.

So looking at this together, it was pretty easy to make a next step and say, OK, we are going to take a service mesh that we developed inside of Google, which is Proxima Service Mesh, and we're going to put it on gRPC.

How do we do that?

We actually reuse the same API's that we using for Envoy and so you have a single control plane, and you have a data plane that is compliant to this API, similarly to Envoy proxy and now you can mix and match one, you can have a proxy service and client communication, where all the traffic management, security etc., is on this side.

So the developer doesn't have to worry about it; it's done for them, and it doesn't need a proxy. At a large scale, it helps with two points:

  1. It improves latency, because you avoid going through the proxy. While latency is not that important for a regular application, it is super important for latency-critical applications that measure every millisecond and sub-millisecond, and it is also important for very large deployments.

So if you think about Gmail, or some other big application, or customers that big: if you put a proxy at each hop and you have these hundreds of thousands of microservices, your daily job will be restarting proxies.

  2. So proxyless also solves the problem of lifecycle management of proxies. The way we've done it, we actually allow them to coexist, so in a single service mesh you can have both proxyless [phonetic 17:19] and proxy.
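
As an aside for readers: from an application's point of view, the proxyless model looks roughly like the hedged Go sketch below, where the gRPC client pulls routing and load-balancing configuration from an xDS control plane instead of sending traffic through a sidecar proxy. The service name, the generated pb stubs, and the use of insecure credentials are illustrative assumptions; a real deployment also needs a gRPC xDS bootstrap file pointing at the control plane.

```go
// A hedged sketch of a proxyless gRPC client: the xDS resolver fetches
// traffic-management config from the control plane, so no sidecar
// proxy sits on the request path.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	_ "google.golang.org/grpc/xds" // registers the "xds" resolver and balancers

	pb "example.com/hello/helloworld" // hypothetical generated stubs
)

func main() {
	// The "xds:///" scheme hands name resolution and load balancing to
	// the xDS control plane named in the bootstrap file.
	conn, err := grpc.Dial(
		"xds:///hello-service",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	client := pb.NewGreeterClient(conn)
	reply, err := client.SayHello(ctx, &pb.HelloRequest{Name: "proxyless"})
	if err != nil {
		log.Fatalf("call: %v", err)
	}
	log.Println(reply.GetMessage())
}
```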

Justin Dorfman 17:21

So it's not just like some big cron job. It's actually, like, optimized.

Tzury Bar Yochay 17:26

This is mind-blowing, JD, this is mind-blowing, I'm telling you. What's going on right now.

Anna Berenberg 17:31

So, because the APIs are the same, you're going to get feature parity, though not full feature parity, because obviously a proxy, by definition, can have more functionality: it's out of process, it can restart independently, and if it crashes, the application process itself doesn't crash, and so on.

Tzury Bar Yochay 17:54

I know one of the things that actually scares me, sometimes literally keeps me up at night, is that we live in a world where, as we know, hardware has fewer bugs than software. That's a fact, and there are reasons for it: logic gates, etc.

Anna Berenberg 17:54

So we have to be a lot more careful than [unintelligible 17:56], but it indeed allows for the coexistence of applications: those that have super low latency requirements, as well as those that cannot afford to have so many proxies in the deployment.

Tzury Bar Yochay 18:34

Software, as Andreessen Horowitz [phonetic 18:37] put it at the time, is eating the world. Now software is everywhere. Security is defined by software, networking is defined by software, even proxies and proxyless are software-defined, transportation and routing and so on, and then there is the amazing tech stack you just described. How can we sleep at night, knowing that every piece of software has bugs, and those bugs can easily be exploited, taken advantage of by malicious entities to carry out all those attacks and breaches and so on? How do we make software more secure, especially the critical pieces that actually handle infrastructure? What are the guarantees, the gatekeepers, that you and your team put in place day to day to ensure robustness, safety, and security?

Anna Berenberg 19:38

Yes, this is an excellent question, and that's where security and reliability come together. One of the things that we've done, let's say at Google for Envoy, is that we founded a whole team that is responsible for the security of Envoy, and the focus on reliability is very important for us: that Envoy doesn't have exploits, that it will never crash, because we're using it as the foundation for our products. That's one thing.

Another thing is the general culture of the teams, and I think Google has a culture very much tilted towards reliability. People think about reliability when they design systems and when they build them. There are a lot of tools to make sure that testing is properly done; there are fuzzing tools, and fuzzing is required for interfaces. There is all sorts of healthy paranoia that is deployed and required when you build infrastructure, which is to say that reliability is more important than any functionality we can add.

In some cases the work is done just because we wanted to make things more reliable, so it doesn't necessarily show up as features that customers can see; they don't see how reliable we make the systems, or at what cost.
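
As an aside for readers: the "fuzzing is required for interfaces" point can be made concrete with Go's built-in fuzzing support (go test -fuzz, Go 1.18+). The parser below is a made-up stand-in; the point is only that an interface must survive arbitrary mutated inputs without crashing.

```go
// A minimal fuzzing sketch (placed in a _test.go file). ParseHeader is
// a hypothetical interface-boundary parser; the fuzz target asserts it
// never panics and keeps a simple invariant for any input.
package header

import (
	"strings"
	"testing"
)

// ParseHeader splits "key: value"; a stand-in for a real parser.
func ParseHeader(s string) (key, value string, ok bool) {
	i := strings.IndexByte(s, ':')
	if i < 0 {
		return "", "", false
	}
	return strings.TrimSpace(s[:i]), strings.TrimSpace(s[i+1:]), true
}

func FuzzParseHeader(f *testing.F) {
	// Seed corpus for the fuzzer to mutate.
	f.Add("content-type: text/html")
	f.Add("no-colon-here")

	f.Fuzz(func(t *testing.T, input string) {
		key, _, ok := ParseHeader(input) // must never panic
		if ok && strings.ContainsRune(key, ':') {
			t.Errorf("key %q still contains a colon", key)
		}
	})
}
```

Running it with "go test -fuzz=FuzzParseHeader" keeps generating mutated inputs until it finds one that violates the invariant or crashes the parser.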

Tzury Bar Yochay 21:24

Can we talk a little bit more about Envoy? I'm not sure how far you can go in terms of things which are proprietary within Google, because it's not a secret that Google has its own flavors; there used to be a flavor of [unintelligible 21:38], and now there are probably, I'm assuming, flavors of Envoy. When we started Curiefense, we looked at, OK, what platform do we want to hook into as the first iteration of the product, of this solution, and we picked Envoy, not being aware of the very fact that Google Cloud products, and apparently AWS and Azure, are all following Google as usual and rewriting their own cloud stack on top of Envoy.

So Envoy, to some extent, if not yet, will soon become the operating system, sort of the operating platform, of cloud networking and application networking. Now, there is a barrier, though. Envoy, on the one hand, is built with an architecture that, as you mentioned, provides APIs and extensibility in the first place.

So, regarding Envoy's own features, as Snow, one of the maintainers, explained to us: when they discuss a feature for Envoy, they first ask, can we implement the feature as an extension using the Envoy core APIs [phonetic 22:54]? Very rarely will they change the core. In most cases, Envoy development is done as Envoy extensions; this is how it is built, extended, and expanded.

So this very extensibility, the multiple APIs and extensions you need to take into account in order to get started, provides some sort of barrier for beginners. But on the other hand, it's like using Vim as an editor: if you get through the first day or two of hassle and don't give up, then you find yourself in heaven, with a peaceful mind; you get to the right place.

So, what are the things that you are doing, I would say, within Google and probably also outside of Google, involving the open source and related products, to make the entry point easier for newcomers, for developers who come from a different platform or different technology, to adopt these new technologies?

Anna Berenberg 24:04

It's actually a very interesting point about core versus extensions, which brings us back to the question of reliability. The reason that the core is kept very limited is to ensure the stability, reliability, and security of the core. So if, let's say, Google doesn't need all these extensions, it can compile them out; it doesn't compile them in, it has just the core, and then it can guarantee quality. Now, everybody needs extensions, because Envoy is not just a proxy, it's a platform; as you said, all of the functionality is built as extensions.

There are two things that I think are going to simplify onboarding and extensibility in the future:

  1. One is standardized interfaces for remote filters. There is one type, authorization remote filters like ext_authz, that has a predefined API. For that, you aren't going to need to touch the Envoy proxy at all; you can run your filter as a collocated service if it's an authorization filter, and you can build services based on that (see the sketch below).
  2. And the other one, which is now in development, actually from Google developers, is called External Processing.

So that one is also a gRPC callout from the proxy: it calls out to an external service, and this service can actually modify the request if needed. That would allow onboarding people without ever touching the proxy itself; the only thing that needs to change is the configuration.

So the configuration is given by the control plane to the proxy, and then the proxy calls out to remote services. This is, like, totally mind-boggling.
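
As an aside for readers: both options follow the same pattern, Envoy makes a gRPC callout to a service you operate, so only configuration changes. Here is a hedged Go sketch of the first kind, a remote ext_authz authorization service built on the go-control-plane generated stubs; the x-api-key policy and the port are made-up illustrations, not a recommended design.

```go
// A hedged sketch of a remote authorization service for Envoy's
// ext_authz filter: Envoy calls Check() over gRPC for every request,
// and the proxy binary itself never has to be touched.
package main

import (
	"context"
	"log"
	"net"

	authv3 "github.com/envoyproxy/go-control-plane/envoy/service/auth/v3"
	"google.golang.org/genproto/googleapis/rpc/status"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
)

type authServer struct {
	authv3.UnimplementedAuthorizationServer
}

func (a *authServer) Check(ctx context.Context, req *authv3.CheckRequest) (*authv3.CheckResponse, error) {
	headers := req.GetAttributes().GetRequest().GetHttp().GetHeaders()

	// Hypothetical policy: allow only requests carrying an API key.
	code := codes.PermissionDenied
	if headers["x-api-key"] != "" {
		code = codes.OK
	}
	return &authv3.CheckResponse{
		Status: &status.Status{Code: int32(code)},
	}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":9001") // example port
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	authv3.RegisterAuthorizationServer(s, &authServer{})
	log.Fatal(s.Serve(lis))
}
```

Pointing Envoy at a service like this is then purely a configuration matter: an ext_authz filter entry referencing this gRPC endpoint, with no change to the proxy code itself.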

And the other option is Wasm in the proxy, which will allow developers to compile their code completely independently; the ABIs are going to be standardized as well, so there is going to be a standard way of getting data in and getting data out, and that's it. So I think you rightly saw that usability should be a concern, because more and more people are using Envoy to implement, again, SDN, right? It's all about value-added services on top of this platform.

Justin Dorfman 26:40

Speaking of extensions and platforms, when is Google going to adopt Curiefense? I mean, we have a thriving community and, you know, we have room for one more organization to start using Curiefense. [crosstalk 26:55] Oh, come on.

Richard Littauer 27:00

You're going to scare away all our guests.

Tzury Bar Yochay 27:05

This is my KPI. This is [crosstalk 27:11]

Anna Berenberg 27:11

The interesting conversation to have is how Envoy as a platform can allow the coexistence of functionality that in some cases could be competing and in some cases complementary: how to build an offering where the consumer can decide, "I want some functionality from this product as the cloud provider's native offering" versus "I want an add-on." In this case, Google Cloud has a product in the same space as Curiefense, which is called Cloud Armor. So it's an interesting proposition to develop an ecosystem like that.

Tzury Bar Yochay 27:53

Justin, you were not aware of this, but you should be aware at this point, that Anna was a great inspiration to us in the very early days of Curiefense. I had a call with Anna, and with Matt Klein, who will probably follow her on this podcast, where we actually discussed Curiefense, its architecture around Envoy, and so on and so forth. So the contribution from Google is already given, JD, on a day-to-day basis.

Justin Dorfman 28:22

[crosstalk 28:23] You know what, I didn't know that, and it should probably be in our About Us, because that's all I know; I joined two months after the release. And I was half joking. I mean, obviously we would love it, but no, I wasn't trying to put you on the spot or anything.

Richard Littauer 28:44

Justin, you're ridiculous. Alright, here is a follow-up. So you do amazing work at Google, you lead many technical leads, and you work really hard to make sure that this work is happening, and a lot of what you're doing has been groundbreaking, which is awesome. A lot of the stuff is open source; gRPC is open source, which is really great. Anyone can get involved, and when you look at the code, I mean, the standards and the principles for gRPC are available at grpc.io, free and open, allowing anyone to look at what's there.

My question is how you incentivize clients, communities, coders, developers, projects, and products like Curiefense to get involved at the planning stage, the stage where you're working, because Google is doing all this work at the top, strategizing, and it's easy to say it's open source at the bottom. It's easy to say, well, we've made it open source, go ahead and use it if you want. But it's much harder to integrate feedback from the community at the stage where you're deciding how to build all of this stuff and strategizing the knowledge base. So what are you doing? What are your teams doing to make sure that the needs of Curiefense, and of other projects like the ones our listeners might have, are taken into account, at a very high level?

Anna Berenberg 29:38

Well, we are always listening. I think listening is an underappreciated activity, and it's important to listen and important to understand not a single requirement but a collection of requirements, because when you have requirements from multiple people, then you're not looking at solving a point problem, you're looking at solving an actually generic problem.

So we are always listening and being attentive both to people who contribute to open source and to our customers, and interestingly enough, our customers are also contributors to open source.

This creates an ecosystem in which we provide value-added services on top of open source, while our customers take advantage of our value-added proposition; they actually improve the open source offerings as well and make them more suitable for themselves, especially on the consumption side: they might produce Kubernetes operators and configure whatever they need, while we provide GCP APIs.

There is a lot of this natural, organic collaboration happening between the open source community, which includes a lot of our customers, and customers who actually are not very much interested in open source but will use Cloud Native. So you have a spectrum of customers, and all you need is to listen, to understand what they're missing.

Richard Littauer 31:46

I love that answer. Thank you. We are coming up on time. So one of the questions I have... oh, aren't we done? I want to keep going, she's fine. Okay, well, then go ahead and ask your question.

You know, Paul, keep that in, I want it to be real and show that we want to keep going with Anna. [crosstalk 32:05] Sorry, Anna, I don't want to... well, I don't want to abuse this privilege.

Anna Berenberg 32:14

You are all funny.

Richard Littauer 32:17

See, Mom, I'm funny.

Justin Dorfman 32:19

So my final question is: where can people listen to you? Do you have any final thoughts, and if not, where can people find your thoughts elsewhere? For your other thoughts on this sort of stuff, do you have a Twitter account or a blog?

Anna Berenberg 32:31

I don't want to think of myself as a person anybody has to listen to.

Justin Dorfman 32:38

In fact, "listening is an underappreciated activity"; that one is from you.

Richard Littauer 32:43

Also, I think this was really interesting to listen to. You have such a depth of knowledge, and your ease of explanation really shows that you're able to take this really complex stuff and just say, "Yes, here's how we do it." It's pretty cool and, like, really great. So you're definitely someone worth listening to.

Anna Berenberg 33:00

Yes, so I do have Twitter, and I'm posting some things on it. I have a LinkedIn account, but no, I don't have a blog.

Richard Littauer 33:07

That's okay. What's your Twitter account?

Anna Berenberg 33:09

It's [unintelligible 33:10], it's K-N-I-G-A, and it's from Russian; it means "book" in Russian.

Richard Littauer 33:19

I love that. Yeah. Well, thank you so much for coming on this podcast. It was really great, and I really appreciate you sharing your knowledge. I'm thinking about the etymology of Envoy, because envoy means messenger, and as we all know, the Roman god of messengers was also the god of flowers. So while we talk of flavors, we could also think of a bouquet of different Envoy things, and so I was trying to find something around this. It was just really great, and now I feel like I've walked through a garden of beautiful flowers. So thank you so much.

Justin Dorfman 33:52

I love how you go, Richard, "Everyone knows." I don't know! [laughter] I always learn something, either from the linguist or a couple of new words that you define. It's very interesting: not only do I learn about committing to Cloud Native, I also get to learn about words, big words, with Richard.

Richard Littauer 34:11

Thank you, so much.

Anna Berenberg 34:13

Thank you.

Richard Littauer 34:14

Thank you.

Special Guest: Anna Berenberg.

