
Responsible AI

19:40
 
Content is provided by Asim Hussain and Green Software Foundation. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Asim Hussain and Green Software Foundation or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://tr.player.fm/legal.
From the recent Decarbonize Software 2023 event, this episode showcases a fireside chat on Responsible AI with Tammy McClellan from Microsoft and Jesse McCrosky from ThoughtWorks. Jesse shares thoughts and experiences from years of working in sustainable tech, covering the risks, sustainability implications, and more regarding AI, before answering some questions from the audience.

Learn more about our people:

Find out more about the GSF:

Events:

Resources:

If you enjoyed this episode then please either:
Connect with us on Twitter, Github and LinkedIn!
TRANSCRIPT BELOW:
Asim Hussain: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.
Chris Skipper: Welcome to Environment Variables. Today, we've got another highlight from the recent Decarbonize Software 2023 event. We'll be showcasing the fireside chat on Responsible AI with Jesse McCrosky, Head of Sustainability and Social Change and Principal Data Scientist at ThoughtWorks, and Tammy McClellan, Senior Cloud Solution Architect at Microsoft and Co-Chair of the Community Working Group and Oversight Committee at the Green Software Foundation. They are introduced by our very own Senior Technical Project Manager for open source projects, Sophie Trinder, so it will be her voice that you hear first. So, without further ado, here's the Fireside Chat on Responsible AI.
Sophie Trinder: Hi everyone, I'm Sophie, the technical project manager for our open source projects at the Green Software Foundation. Today I'm going to introduce a special fireside chat to our Decarbonize Software event. We're continuing the conversation that began on the 5th of October at our panel on responsible AI. The conversation surrounding responsible AI is dynamic, oscillating between optimism and skepticism. On one side, practitioners believe that AI has the potential to drive sustainable development goals, from responsible consumption to waste management and energy conservation. The promise lies in our improvements in measuring software's environmental impacts, and innovation across energy-efficient algorithms, hardware optimizations, and the growing use of renewable energy sources. On the other side, the rapid expansion of AI, particularly large language models, and the insatiable demand for this technology, are raising concerns. If left unchecked, the energy consumption and resource utilization associated with AI make many feel like we're endangering a future where software causes zero harmful environmental impacts. To help us explore the path forward, I'm thrilled to introduce Tammy McClellan, Senior Cloud Solution Architect at Microsoft, and Jesse McCrosky, Head of Responsible Tech and Principal Data Scientist at ThoughtWorks. Thanks, both. Take it away.
Tammy McClellan: Thanks, Sophie. And a hello to all you sustainability addicts. Jesse, hello. Let's start with this question: how do you see the relationship between responsible AI and sustainability?
Jesse McCrosky: Hey Tammy, great question, and nice to see you all. So at ThoughtWorks we use a framework that I like, which we refer to as "greening of tech" and "greening by tech", and I think this is the best lens through which to view that question. Greening of tech refers to the fact that these systems, and especially generative AI as we're talking about now, have serious energy consumption; they have serious sustainability issues that need to be tackled.
The other side is greening by tech, which recognizes the potential that this technology has to actually improve the sustainability of other processes, either within or outside of the tech world. And I think what ties these two questions together is transparency and information: ensuring that people have the information they need to make the right decisions for our environment.
Tammy McClellan: I like that, greening of tech and greening by tech. It's my new mantra now. So how can we use this to make more sustainable solutions?
Jesse McCrosky: So it's a big question. To begin with, I referred to transparency, and when we talk about transparency, a lot of people think that means you share your source code, or you share your model weights, and then you're transparent. Or it means you have to explain the decisions the AI is making, and that's transparency. Transparency is more than that. There's a report I did with the Mozilla Foundation on AI transparency, where we talk about meaningful AI transparency that needs to be legible, auditable, and actionable. And this means that we have to consider the specific stakeholders the information is being provided to: what are their needs, and what are they going to do with this information? So it comes down to the old adage that you can't manage what you can't measure. For example, in order to support meaningful policy and meaningful regulation, we need to have information about the sustainability characteristics of these systems.
Tammy McClellan: So talk to us a little bit about some possible solutions in this area.
Jesse McCrosky: Yeah, absolutely. So when we're looking at solutions, especially using the kind of transparency lens, we can think about who the transparency is being provided to. So, for example, we can talk about consumers. And right now, consumers are very excited about ChatGPT or whatever else, Stable Diffusion, DALL-E, and everything like that. It's a lot of fun to play with. And they do not have meaningful information about the carbon implications of that play. So someone was suggesting to me that ChatGPT should have a real-time counter across the top somewhere that's telling you how much carbon you have emitted so far in your session, how many gallons of water have been consumed, whatever else. And it is not a matter of just shaming people; it's helping people make the right choices, because there might be applications for which ChatGPT is really worthwhile to use, but there's other times that somebody's just idly playing or something like that, and if they realize the implications of doing so, they might make other choices. This becomes more interesting when we talk about communication between, for example, model developers and model deployers. So, for example, if somebody is using the OpenAI APIs in their product, they need to be able to have information about what the implications of those API calls are so they can make good choices in how they build their software.
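To make the counter idea concrete, here is a minimal sketch of such a session counter in Python. Every conversion factor in it is an assumed placeholder: real per-request energy, carbon, and water footprints for hosted models are not published, so the numbers are illustrative only.

# Toy session-footprint counter. All conversion factors below are
# hypothetical placeholders, not published measurements.
GRAMS_CO2E_PER_1K_TOKENS = 4.0    # assumed emission factor
LITERS_WATER_PER_1K_TOKENS = 0.5  # assumed cooling-water factor

class SessionFootprint:
    """Accumulates rough footprint estimates across one chat session."""

    def __init__(self):
        self.tokens = 0

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.tokens += prompt_tokens + completion_tokens

    @property
    def grams_co2e(self) -> float:
        return self.tokens / 1000 * GRAMS_CO2E_PER_1K_TOKENS

    @property
    def liters_water(self) -> float:
        return self.tokens / 1000 * LITERS_WATER_PER_1K_TOKENS

counter = SessionFootprint()
counter.record(prompt_tokens=120, completion_tokens=480)  # one exchange
print(f"session so far: {counter.grams_co2e:.2f} g CO2e, "
      f"{counter.liters_water:.2f} L water")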
Tammy McClellan: So awareness is key, absolutely. So Jesse, what is the potential for Gen AI to support greater sustainability?
Jesse McCrosky: Yeah, it's an exciting question, and I think there is some potential here. There's a case that ThoughtWorks took, a couple of years back now, I think, in which we worked with an international manufacturing and services company. They were interested in finding solutions to meet their sustainability goals, and they just weren't sure which way to go.
They weren't sure, "should we start sourcing our energy from a different place, or using different sorts of transportation, or using different industrial processes, or offering different products?" And so what we did for them was build a mathematical model of their operations and their supply chains. Once we had that mathematical model, we were able to build a sort of scenario-modeling dashboard where we could show them, "hey, if you switch to delivery trucks that are using electricity instead of gas, this is what happens to your emissions, this is what happens to your bottom line, this is what happens to your customers."
And likewise for considering different product mixes, different sourcing, whatever else. So the mathematical model here was not rocket science, to be honest; it was fairly simple stuff. The hard part of this engagement was really understanding the business at the level we needed to in order to build that model. There were many hours of interviews and poring over notes and internal documents and everything else, as well as some basic desk research to determine the necessary carbon emissions factors, that sort of thing. I'm excited at the potential of generative AI to make this sort of process more accessible and more scalable. I think we've seen evidence so far that these models do a very good job of looking at these sorts of documents, looking at recordings and interviews, and it may be possible that you could create this model semi-automatically, with far less of that heavyweight and expensive intervention. It was also challenging to understand exchangeability. So, for example, if the company is buying cotton in one particular country, it might be obvious to us that they can instead buy the same cotton from some other country, and that that's the only possible change that could be made. But it's not so simple for the model to figure that sort of thing out automatically.
Whereas GenAI, I think when we connect to these sorts of emissions factors databases, has the potential to make this process much easier.
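The modeling Jesse describes comes down to multiplying activity levels by emission factors and re-running the sum under different assumptions. A heavily simplified sketch, with invented activity data and emission factors, might look like this:

# Minimal scenario model: emissions = sum(activity_level * emission_factor).
# All numbers below are invented for illustration.

baseline = {
    # activity: (annual level, kg CO2e per unit)
    "truck_km":        (1_200_000, 0.85),   # diesel delivery fleet
    "electricity_kwh": (3_000_000, 0.45),   # grid power
}

def total_tonnes(activities: dict) -> float:
    """Sum kg CO2e over all activities and convert to tonnes."""
    return sum(level * factor for level, factor in activities.values()) / 1000

# Scenario: switch the delivery fleet to electric trucks, which lowers the
# per-km emission factor (again, an invented number).
electric_fleet = dict(baseline)
electric_fleet["truck_km"] = (1_200_000, 0.15)

print(f"baseline:       {total_tonnes(baseline):,.0f} t CO2e/yr")
print(f"electric fleet: {total_tonnes(electric_fleet):,.0f} t CO2e/yr")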
Tammy McClellan: Yeah, awesome. Let's move a little bit and talk about risks. How do you think businesses can manage the risks of AI?
Jesse McCrosky: Yeah, it's a big question. I think everybody's talking about this. And what I would say is it's critical to understand that risks must be mitigated, not removed. A lot of people are talking, for example, about bias and discrimination, and they say, okay, we're going to produce a model that's perfectly fair and perfectly unbiased, or we're going to eliminate this bias from our model, or whatever else. And this is just not the way things work. We live in the real world, and these systems are based on data from the real world. And the real world is unjust, so we need to be ready to tackle that. One example that I like is OpenAI with their DALL-E image generation system. For a while, maybe some months ago, I think, if you asked it for pictures of lawyers, it was going to give you eight pictures of white men, basically.
And OpenAI recognized that there was a problem there, as did the community, of course. So eventually, OpenAI had a short blog post where they talked about how they were going to fix this. And it was apparently fixed: when people tried to get pictures, they would see pictures of lawyers, and some of them would be women, and some of them would be of different ethnicities, and everything else. So people were curious how this had been fixed, and it turned out that all OpenAI was doing was randomly appending words like "woman" or "Black" or "Asian" or whatever else to these prompts. People were not super impressed with this solution, but I think it's an important illustrative example, because it's a mitigation: there was a problem with the model, there was a problem with the data, and this is not a problem that can be solved fundamentally. It needed to be mitigated, and they found a way. They said, "here's the harm that's going to come from the system: it's not going to produce adequate representation, and we found a way that we can show more representation." So this is the sort of mitigation that companies need to take. And this is where transparency comes in around the carbon impacts as well, so that issues can be mitigated. If I'm an engineer sitting in front of my laptop writing some software, I need to have awareness that if I make this Gen AI call or whatever else, it's going to spike the carbon emissions of my product, and I need to find another solution.
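For illustration, the reported fix amounts to something as simple as the sketch below. The term list and trigger logic here are guesses at the general approach, not OpenAI's actual implementation:

import random

# Crude diversity mitigation of the kind reported for DALL-E: randomly
# append a demographic term to underspecified people-prompts. The terms
# and the trigger condition are illustrative guesses only.
DIVERSITY_TERMS = ["woman", "Black", "Asian", "Latina", "older"]

def mitigate(prompt: str) -> str:
    if "person" in prompt or "lawyer" in prompt:  # naive trigger
        return f"{prompt}, {random.choice(DIVERSITY_TERMS)}"
    return prompt

print(mitigate("a portrait of a lawyer"))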
Tammy McClellan: Gotcha. Yeah, that makes sense. Tell me, are you optimistic or pessimistic about Gen AI at this point?
Jesse McCrosky: I think I'm mixed. I think that ultimately solving the climate crisis means simultaneously solving a social crisis. And I think it's very hard to solve climate change without also solving issues of social justice globally. And I think that Gen AI is a tool that might enable some of these conversations to be tackled in a more interesting way.
So I think as long as we're mindful and honest and clear-eyed about how we apply this technology, there can be some optimism there. We need to ensure that we have adequate transparency so that people understand the carbon implications of the choices they're making when they're using these systems, but given that, there is potential to do better.
Tammy McClellan: Gotcha. So I know when you and I chatted before, you said that you had a fun story about AI. Did you want to tell us what that is?
Jesse McCrosky: Ah, so actually, I think there's a misunderstanding. The fun story was an expanded version of what I was talking about before, but
Tammy McClellan: Gotcha.
Jesse McCrosky: if we have a moment, there's one thing I want to add, coming back to the idea of how transparency can help Gen AI be used more responsibly. A lot of people are familiar with the concepts of DevOps or MLOps or CD4ML, these sorts of processes. And I think this is a really critical place for transparency around carbon emissions to be integrated. The point I would make is that right now, a software developer working in a modern setup has the ability, as they're writing code, to see immediately if the code they're changing is causing some test to fail, or is causing some performance degradation, or is introducing some bug or whatever else. And I think we need to have the same process for carbon, so that an engineer making a choice can see the impact. For any devs out there, maybe you have a case where you need to use a regular expression, but it seems like too much work to figure it out: "hey, I can just call a Gen AI model and it'll do it for me."
It'll work just fine. And you might make that choice because it saves you a couple of minutes or whatever. But if you then see that all of a sudden your dashboard turns red and says, okay, your carbon has just increased like 100 percent or whatever, you're going to come back and you're going to revisit that decision. And your team is also going to see that trail of what's happening because of what you've done. And so it creates this sort of accountability in the development process.
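What might that look like in a pipeline? One way to wire carbon into CI, sketched below, is a check that fails the build when the estimated footprint regresses past a threshold. The file names, the metric, and the threshold are all hypothetical; how the per-build estimate is produced is a separate problem:

# Hypothetical CI gate: fail the build if estimated carbon per 1,000
# requests regresses more than 20% against a stored baseline.
import json
import sys

THRESHOLD = 1.20  # allow at most a 20% increase

def load_grams_per_1k(path: str) -> float:
    with open(path) as f:
        return json.load(f)["grams_co2e_per_1k_requests"]

baseline = load_grams_per_1k("carbon-baseline.json")  # committed to the repo
current = load_grams_per_1k("carbon-current.json")    # produced by this build

ratio = current / baseline
print(f"carbon per 1k requests: {baseline:.1f} g -> {current:.1f} g "
      f"({(ratio - 1) * 100:+.0f}%)")
if ratio > THRESHOLD:
    print("carbon budget exceeded; failing the build", file=sys.stderr)
    sys.exit(1)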
Tammy McClellan: All right. So I'm curious. What are the top three recommendations you would give to people who are interested in reducing carbon emissions of AI?
Jesse McCrosky: Good question. And yeah, I think that's something I haven't really touched on so far, but there are a lot of choices that can be made when applying AI. We don't need to use the biggest general-purpose models for everything. I think there are cases where a general-purpose model is really needed, but in most cases, no. So we can talk about using much simpler application-specific models. We can talk about using a smaller model and fine-tuning it for the particular task. There are processes like quantization and distillation that can make models much more carbon efficient and nearly as effective. So investigating these options, and again, I think this kind of hinges on the MLOps setup, where you need to be ready to evaluate performance. You need to be able to say, "how small can I make this model and still actually meet the requirements in my product?" Beyond that, I think it's a matter of providing transparency to the end user, so that users understand the choices they're making when they're using your product. There are a lot of different ways this can play out. This can mean some Gen AI chatbot or something like that, but it can also be, say, an e-commerce platform where you're using AI to make recommendations to your users, and the recommendations that you make can influence their behavior and encourage them to buy more products that are disposable or made in very carbon-intensive ways. So considering these sorts of externalities as well is really critical.
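Of the options Jesse lists, quantization is the easiest to show in a few lines. Here is a minimal sketch using PyTorch's dynamic quantization on a toy model; actual size and energy savings depend on the model and hardware:

import torch
import torch.nn as nn

# Toy model standing in for something larger.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Dynamic quantization: weights of the listed layer types are stored as
# int8 and dequantized on the fly, shrinking the model and often reducing
# energy per inference on CPU.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)  # same interface, smaller weights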
Tammy McClellan: Gotcha. I'm curious, do we have any questions from the audience at this point?
Sophie Trinder: Yes, we do. And thank you so much, Tammy and Jesse. It's been a really great session on AI here at Decarb, and it really shows the passion in the industry for these technologies, plus the responsibility that we all must take when it comes to AI. I know we'll be hearing a lot more in the coming months. But yes, we've got a few questions from the audience. I just want to shout out first: Jesse, thanks for the fun story on OpenAI and how they mitigated the problem with the data to show more representation. It was a really interesting insight, thanks. So one question from the audience: how important is prompt engineering for improving AI efficiency?
Jesse McCrosky: Great question, yes, and it's really extremely important, because the energy being consumed by the model is going to depend in some complex ways on how many tokens are coming into it, and in quite a direct sense on how many tokens are coming out of it. So if we can reduce the number of tokens going through the system, we reduce the carbon emissions. And this again, I know I'm sounding like a stuck record, but it really depends on the MLOps setup, where we should be able to test and see how short we can make our prompts and still accomplish what we need to do. And this is both the length of the prompt itself and the length of the output. So, for example, go back to that example I was talking about, where maybe ChatGPT has a little indicator at the top telling you how much carbon has been emitted in your session so far. Maybe if you see that number growing as you're chatting with it, you're going to say, "hey, ChatGPT, please be a little bit more brief with your answers. I don't need the whole kind of colorful language and going on and on about everything." So yes, it's very important.
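As a concrete way to measure what he describes, the sketch below counts tokens for two phrasings of the same request using the tiktoken library, then converts the counts with an assumed emission factor. The factor is a placeholder, not a measured value:

import tiktoken

# Count tokens the way OpenAI-style models do, via the cl100k_base
# encoding; the carbon factor below is an assumed placeholder.
GRAMS_CO2E_PER_1K_TOKENS = 4.0

enc = tiktoken.get_encoding("cl100k_base")

verbose = ("Could you please, in as much detail as you can manage, "
           "explain to me what a regular expression for an email looks like?")
terse = "Regex for an email address. Be brief."

for prompt in (verbose, terse):
    n = len(enc.encode(prompt))
    print(f"{n:3d} tokens ~ {n / 1000 * GRAMS_CO2E_PER_1K_TOKENS:.3f} g CO2e: "
          f"{prompt[:40]}...")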
Sophie Trinder: Super interesting. Thank you. We've got another one on training AI/ML models, which obviously takes a huge amount of data and processing, which in turn causes a lot of emissions. How do you think we could best counteract that?
Jesse McCrosky: Yeah, good question. I have an article out where I actually talk about how the comparisons are a little bit overwrought, the ones saying that training a model is equivalent to driving a car some distance or whatever. I think the comparison, at least so far, thankfully, is not quite accurate, because we have many cars on the road and a relatively small number of models being trained. The important thing is to keep it that way. The important thing is that we need to encourage the use of open models and shared models, rather than every single organization in the world trying to train their own LLM. And this is why I would be a strong supporter of open-source models. I think it's nice to see that movement.
I think it has potential. It means that organizations, first of all, save money, but also save carbon, when they want to explore these elements in their business. And there's always the potential for fine-tuning, or whatever other tools need to be applied to open models to make them suit people's applications.
Sophie Trinder: Amazing. Thank you. And jumping back to sort of problems on data and representation, we've got another question centered around that. So do you think we should promote digital humanism and ethical AI to raise awareness about the need for sustainable AI?
Jesse McCrosky: Yeah, absolutely. I think we're existing at a moment where responsible AI and such is being discussed everywhere. There's very active regulatory work in many different regions of the world. There's many people in academia, in civil society, and in industry doing this sort of work. And I think that green AI should come along for the ride, so to speak, and it should be an important part of how we think about the risks and the potentials of these models.
So, yes.
Sophie Trinder: Amazing. Thanks very much.
Chris Skipper: So that's all for this episode of Environment Variables. If you liked what you heard, you can actually check out the video version of this on our YouTube channel. Links to that as well as everything that we mentioned can be found in the show notes below. While you're down there, feel free to click follow so you don't miss out on the very latest in the world of sustainable software here on Environment Variables. Bye for now!
Asim Hussain: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please, do leave a rating and review if you like what we're doing. It helps other people discover the show and of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again and see you in the next episode.

