
Content provided by Grant Larsen. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Grant Larsen or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://tr.player.fm/legal.

FIR 139: The Data Privacy 2-STEP For YOUR AI !!

18:39
 

In this episode, we take a look at some simple steps to protect the privacy of the data for your AI.

Welcome everybody to another episode of ClickAI Radio. Data privacy has certainly been on the minds of a lot of people, organizations, governments, and institutions. No surprise there. One of the things about AI, though, is that in general we haven't spent as much time putting a focus on that area. That's been a bit of a problem, and it will become more of a problem if we don't do something about it as we work with our applications and uses of AI. Now, I was looking at several different groups and what they had to say about this. At the end of this episode, I'm going to share two steps that I have seen help mitigate these challenges, to help prevent the slippage, if you will, of people's private information getting out there when it shouldn't. To frame that up, I want to introduce a framework. One of the blogs I looked at is called "Beware the Privacy Violations in Artificial Intelligence Applications," and it comes from ISACA (isaca.org). Here's an interesting quote from it: "Artificial intelligence has been no different when seen through a privacy-by-design lens, as privacy has not been top of mind in the development of AI technologies." End quote.

I agree that that has been true. In fact, a lot of our efforts have been focused on simply proving the viability of this technology in terms of helping people, individuals, and businesses. Certainly there has been success with AI, and there have been some challenges. Now, what that blog introduces is three interesting pieces to consider when we're looking at privacy. The first has to do with what's called data persistence. Put your nerd hat on for a moment: data persistence means the data existing today will last longer than the human subjects who created it. That's driven by things like low data storage costs and all the technologies that allow our data to live a lot longer than we do, and it creates a potential privacy challenge. The second is called data repurposing: data that was originally created for one purpose gets used in ways beyond what was originally intended. And AI is data hungry; it will suck that up. The third area is what's called data spillover, which is data collected on people who were not the target of the data collection. This actually drove a lot of the GDPR work. Out of these concerns came things like GDPR in Europe and certainly CCPA in California; all of them point to the need for some regulation around this.
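As a rough sketch, those three risk areas can be turned into a simple automated checklist that runs over your dataset inventory. Everything here is hypothetical and illustrative, not something from the episode: the field names, the seven-year retention policy, and the example profile are all made up.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetProfile:
    """Hypothetical metadata we might record for each training dataset."""
    name: str
    retention_years: int                 # how long we plan to keep the data
    original_purpose: str                # purpose stated at collection time
    planned_uses: list = field(default_factory=list)
    subjects_were_targets: bool = True   # False means possible spillover

def privacy_risk_flags(ds: DatasetProfile, max_retention_years: int = 7) -> list:
    """Flag the three privacy risk areas: persistence, repurposing, spillover."""
    flags = []
    if ds.retention_years > max_retention_years:
        flags.append("data persistence: retention exceeds policy")
    if any(use != ds.original_purpose for use in ds.planned_uses):
        flags.append("data repurposing: use beyond original purpose")
    if not ds.subjects_were_targets:
        flags.append("data spillover: data on non-targeted people")
    return flags

profile = DatasetProfile(
    name="customer_orders",
    retention_years=25,
    original_purpose="order fulfillment",
    planned_uses=["order fulfillment", "churn prediction"],
    subjects_were_targets=True,
)
print(privacy_risk_flags(profile))  # persistence and repurposing both flagged
```

The point of a sketch like this is simply that each of the three risks becomes a written, checkable question rather than something left implicit.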

Now, it's one thing to have regulation; it's entirely different to enforce it. Some of that falls on us as business owners: there are a few things we can and must do to protect the privacy of people's data and information while still delivering value from AI. That's certainly the balance we're going for. One of the primary concerns with AI is its ability to replicate, reinforce, or even amplify harmful biases. This is a challenge because those biases can proliferate and end up driving insights, recommendations, and predictions that are wrong, that carry that human bias in them. There's another challenge on top of that. Let's say we're going to try to fix or solve for that. One of the problems is that a lot of our auditing methods today examine something that has already occurred, and with AI that makes things even more difficult. It means I've created an AI model and started acting on its recommendations, insights, and decisions, in the ways I work with people or deliver solutions, with the bad behavior already incorporated. So the audit comes late. That doesn't mean we shouldn't audit the data, but audits are post-deployment by nature; I've already deployed it. What we have to figure out is a balance between privacy and AI progress: I'm going to grow my data usage in a way that protects the privacy of the individuals involved, but I also need to allow AI to move forward. That, right there, is the challenge we're pursuing. Some groups use consent for this today: you get someone to agree that they're happy to share their information with you.

That has its challenges. Consent is not always as powerful a tool as we might believe, and there have been examples where consent was still misappropriated: what people originally understood to be consent for a certain use spilled over into other areas, meaning people didn't know their data was being used by AI for other purposes. So even when consent is there and organizations are well-intending, controlling the boundaries of that consent and enforcing them is still a real challenge. It relies on a lot of people handling it manually, which means more opportunities for us to mess up. I was looking at a report from the Brookings Institution on AI governance. Most interesting, there is legislation pursuing this balance: how to pass privacy legislation while still allowing AI to do the kind of work that brings benefits to humanity from this awesome technology. One of the techniques mentioned in that Brookings report, and you've heard it before, is what's called algorithmic clarity: having clarity on how the algorithms use your information. That seems like a useful piece of the puzzle. But one of the problems, as an SMB owner, is that the burden typically lands on the back of the SMB owner to do two things. One, I have to make things transparent, so my customers are aware that some aspects of their business information are leveraged by AI.
So that's the first incumbency: I'm going to tell you what we're going to do with your data, that it's going to be used in AI, and I'm going to draw the line on what information is not used. That's what comes out of this: being forthright with our customers about the intended use of their information, what will and will not be used, and drawing that line. That's the first consideration in algorithmic clarity. The second consideration is explainability: again, letting your customers know what kinds of algorithms are being used.
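Enforcing that line in software can be as simple as a purpose check against each customer's recorded consent, with a deny-by-default rule so new AI uses never silently spill over. This is a minimal sketch under assumed names; the customer IDs, purposes, and the in-memory dictionary are all hypothetical stand-ins for a real consent store.

```python
# Hypothetical consent records: which purposes each customer agreed to.
CONSENTS = {
    "cust_001": {"order_processing", "marketing_email"},
    "cust_002": {"order_processing"},
}

def may_use(customer_id: str, purpose: str) -> bool:
    """Allow a data use only if it falls inside the consented purposes.

    Anything not explicitly consented to is denied: the deny-by-default
    rule that keeps new AI uses from silently spilling over.
    """
    return purpose in CONSENTS.get(customer_id, set())

print(may_use("cust_002", "order_processing"))   # consented use
print(may_use("cust_002", "ai_price_modeling"))  # never consented, denied
```

The design choice that matters is the default: an unknown customer or an unlisted purpose returns False, so the system fails closed rather than open.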

That may include access to a human to provide the clarity. That's all well and good, but here's the thing I struggle with: what does it really mean to have someone on your team explain, "Oh, we use linear regression, or a particular classification model, or a Bayes model"? What good is that? Ninety-nine percent of people are going to say, "What are you talking about? How did that help me understand any better?" So, to recap this area: one, transparency, which is being clear with your customers that we will use this kind of information, and not that kind, in the AI. And two, coming up with a plain description of the kinds of algorithms being used. Here's maybe a simple way to think about it: break it into two buckets. When you're explaining AI to your customer, say, "We're going to use this set of data for our AI algorithms, and there are two major areas." What I'm going to say here doesn't apply to all of AI, but for AI for analytics it does, and it can be simply this. For those AI problems where we're trying to determine yes-or-no answers, we use what's called classification models: would it be good for us to sell you this product or that product, yes or no? Those are classification problems. Then there's the other kind of problem, where we use AI algorithms to help us know, say, the right price range. Those are called regression-style algorithms, but you don't need to say that.
Put simply: we're going to use AI to answer yes-or-no kinds of questions, and we're going to use it to understand things like proper pricing, things that are not yes or no but degrees of difference. Those are the two major buckets, and we can work on developing pretty simple language to explain them. Otherwise, what good is it if people can't get it? Alright, let me summarize: there are two things I think an SMB can do when applying AI.
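The two buckets above can be sketched in a few lines with scikit-learn on toy data. The features, labels, and products here are entirely made up for illustration; the episode doesn't name any library or dataset.

```python
# Bucket 1: yes/no questions -> classification.
# Bucket 2: "how much" questions -> regression.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Toy feature: past purchase count; toy label: did they buy the product?
X_cls = [[0], [1], [2], [8], [9], [10]]
y_cls = [0, 0, 0, 1, 1, 1]
clf = LogisticRegression().fit(X_cls, y_cls)
print(clf.predict([[7]]))  # a yes/no answer: offer this product or not

# Toy feature: product size; toy target: a reasonable price.
X_reg = [[1], [2], [3], [4]]
y_reg = [10.0, 20.0, 30.0, 40.0]
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[5]]))  # a degrees-of-difference answer: a price
```

The customer-facing explanation stays in plain language ("yes-or-no questions" versus "how-much questions"); the model classes are just the implementation detail behind each bucket.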

Alright, so to help solve this data privacy problem: after looking at and applying AI in many situations over many years, it comes down to these two steps. They're a bit oversimplified, but I still think that if you print them out and put them on your wall, they could save you some real pain. The first one is where I've seen a fair amount of pain, and it will sound really simple: start your AI journey with vetted questions and data oversight. "Like what?" I'll come back to that. Number two: apply AI using smart steps. "What?" I'll come back to that too. Alright, let me talk about number one for a moment, starting your AI journey with vetted questions and data oversight. It seems like an obvious first step, but when you examine case studies of AI failures, it looks like this step was either skipped or not given the proper weighting. A key technique here is to first vet, by leveraging an independent party or going under NDA with someone, the intended question you're trying to address with AI. Some of the biggest missteps with AI have been exactly that: it was the wrong question to be asking, the wrong use case to be pursuing, something inherently prone to lots of bias or an unethical use of AI. So in this step, the anticipated questions for the AI (I know it sounds simple) should be written down and evaluated in the context of the impact on your customers, on other interested parties, and on humanity, for crying out loud. Just stop and do that simple stuff. I know it sounds obvious. Now, what does it mean to vet your AI questions?
Evaluate the AI implications for your customers, and look at the three elements I introduced earlier. In other words: will there be data persistence? What does it mean for the data I'll be collecting to exist longer than the humans who created it, and what does that do in terms of privacy impact? Data repurposing: wait a minute, are we going to use the data beyond its originally imagined purpose, and if so, what obligation do we have to the people involved? And number three, data spillover: are we collecting data on people who were not the targets we initially intended? So stop and ask: do I have the right question, and what is the impact across these three AI privacy areas of data persistence, repurposing, and spillover? That's step one: stop and do a little vetted questioning and data oversight before you get too far. Number two is what I call applying AI using smart steps. This means iterating on the AI model and continuing to refine, refactor, and rebuild it as more is learned. That allows us to adjust AI models that might have some bias we discover. So: build your vetted model, learn from experience, evaluate the impact on your business and your customers, and then iterate.
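One lightweight way to make step one concrete is a vetting record that refuses to pass until each of the three privacy areas has a written answer and an independent reviewer is named. This is a hypothetical sketch; the field names, the example question, and the reviewer role are invented for illustration.

```python
# The three privacy areas every vetted AI question must answer for.
VETTING_CHECKS = ("persistence", "repurposing", "spillover")

def vet_question(question: str, answers: dict) -> bool:
    """Pass only if every privacy area has a written answer and an
    independent reviewer (e.g. an NDA'd outside party) is named."""
    missing = [check for check in VETTING_CHECKS if not answers.get(check)]
    if missing or not answers.get("reviewer"):
        print(f"NOT vetted, missing: {missing or ['reviewer']}")
        return False
    print(f"Vetted: {question!r}")
    return True

ok = vet_question(
    "Which customers should get the spring promotion?",
    {"persistence": "deleted after 2 years",
     "repurposing": "marketing only, per consent",
     "spillover": "only opted-in customers included",
     "reviewer": "outside auditor under NDA"},
)
```

Writing the question down and forcing an explicit answer per risk area is the whole mechanism; the code just makes skipping a check impossible to do silently.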

There's a book that came out not too long ago by Bernard Marr called "Artificial Intelligence In Practice." In it he covers 50 company use cases where AI was applied, and I want to pull out something interesting. There's one use case about Alibaba in China, who ultimately built a virtual platform that mimicked customer behaviors. One of the reasons they did it was that it would take too long and too much effort to continually refactor their live system, so the virtual platform lets the AI keep being refined, rebuilt, and refactored. Now, doing lots of model rebuilding can be a really heavy effort in some situations. So you're either going to put in the extra effort upfront to really ensure you've solved the data privacy problem, or you do the smart steps, which I have found really helpful: take the model, build it, try to apply it, and look for where a bias or a privacy exposure appears when you didn't expect it. The lesson is this: adjust your mindset as a business owner to refine your AI model over time. Take into account changes in context, changes in the economy, and lessons learned, and accept that part of doing AI means you will continue to refine, improve, and rebuild the model. When you combine these two steps, the vetted, privacy-aware questioning and data oversight plus the smart-steps mindset of refactoring and improving your model over time, I've found you're in a much better position on AI data privacy. It puts you on a great path for both near-term and long-term viable business impact for your organization.
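The smart-steps mindset can be sketched as a simple build-audit-refine loop. Everything below is a toy stand-in, not a real fairness audit: `audit_gap` pretends each rebuild halves the observed disparity between two groups, just to show the loop's shape.

```python
def audit_gap(round_no: int, initial_gap: float = 0.4) -> float:
    """Toy audit: the observed approval-rate gap between two groups
    after `round_no` rebuilds (pretend each rebuild halves it)."""
    return initial_gap / (2 ** round_no)

def smart_steps(tolerance: float = 0.1, max_rounds: int = 10) -> int:
    """Rebuild and re-audit until the gap is within tolerance.

    Returns the number of refinement rounds that were needed."""
    for round_no in range(1, max_rounds + 1):
        gap = audit_gap(round_no)
        print(f"round {round_no}: audited bias gap = {gap:.3f}")
        if gap <= tolerance:
            return round_no  # model accepted after this many refinements
    return max_rounds        # budget exhausted; escalate for review

rounds_needed = smart_steps()
print(f"accepted after {rounds_needed} rounds")
```

In a real system the audit step would measure something concrete (group-level error rates, unexpected data exposure) on held-out data, but the loop structure, build, apply, audit, refine, is the point.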
Alright, everybody, thanks for joining. Until next time, use the two steps to ensure privacy for the AI that brings you incremental business growth.

Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And to download your free ebook, visit ClickAIRadio.com now.


159 episodes

