Top Moments From ChatGPT Creator's Congressional Testimony

The Digital Distribution Era: A New Frontier for Content Creators

As we navigate the digital distribution era, it's clear that the traditional music industry model is no longer sufficient. The rise of streaming services and social media has changed how audiences consume media, and creators are looking for new ways to reach them. But with great power comes great responsibility, and lawmakers are finally starting to take notice.

At the hearing, Senator Blackburn cited Senator Durbin's warning that if lawmakers aren't involved from the get-go, this technology gets away from us, and she pressed the witnesses on protections for content generators and creators in generative AI. We couldn't agree more. The economic model of content creation is changing rapidly, and it's essential that creators are compensated fairly for their work.

The concern about misinformation is also very real. As false information continues to spread across social media platforms, it's becoming increasingly difficult to know what's true and what's not. This is not just a problem for social media companies; it's a problem for all of us. We need ways to verify content before it spreads, and that requires serious coordination between industry leaders and lawmakers.
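
Accuracy is hard to check mechanically, but one piece of that coordination problem is tractable today: knowing whether a given piece of content came from a known AI system at all. Below is a minimal sketch of provenance tagging, assuming a hypothetical shared-secret scheme between a model provider and a platform; the key, the function names, and the scheme itself are illustrative assumptions, not any provider's actual API (real proposals lean on public-key signatures or watermarking).

```python
# Minimal sketch of content provenance tagging (illustrative only).
# Assumption: a model provider and a platform share a secret key,
# exchanged out of band; real systems would use public-key signatures.
import hashlib
import hmac

PROVIDER_KEY = b"example-shared-secret"  # hypothetical key for this sketch

def tag_output(text: str) -> str:
    """Provider side: compute a provenance tag for generated text."""
    return hmac.new(PROVIDER_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def is_provider_output(text: str, tag: str) -> bool:
    """Platform side: verify that the tag matches the provider's key."""
    return hmac.compare_digest(tag_output(text), tag)

content = "Polls close at 8 p.m. statewide."
tag = tag_output(content)
print(is_provider_output(content, tag))        # True: label as AI-generated
print(is_provider_output(content + "!", tag))  # False: altered or untagged
```

A shared secret is the simplest possible version; in practice a provider would publish a verification key so any platform could check tags without being able to forge them.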

That's why we're advocating for a nimbler, smarter approach to regulation. Rather than relying on Congress to legislate each new issue as it arises, we think an agency specifically designed to oversee this kind of technology would be a better solution. It would allow regulators to respond quickly to emerging issues and ensure that the public is protected.

But what about licenses? Should companies be required to obtain a license to produce generative AI tools? We believe it's a question worth taking seriously; at the hearing, OpenAI itself suggested licensing and testing requirements for models above a threshold of capabilities. If these tools are capable of generating high-quality content at scale, shouldn't there be some kind of accountability?
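
To make the threshold idea concrete, here is a minimal sketch of the kind of rule such a licensing regime might encode. Everything in it is an assumption invented for illustration: the compute cutoff, the field names, and the gating logic come from no actual or proposed requirement.

```python
# Illustrative sketch of a capability-threshold licensing gate.
# The threshold and fields below are invented for the example.
from dataclasses import dataclass

LICENSE_THRESHOLD_FLOP = 1e26  # assumed cutoff on training compute

@dataclass
class ModelRelease:
    name: str
    training_compute_flop: float  # proxy for capability in this sketch
    passed_safety_evals: bool     # result of required testing

def requires_license(release: ModelRelease) -> bool:
    """Only models above the compute threshold would need a license."""
    return release.training_compute_flop >= LICENSE_THRESHOLD_FLOP

def may_release(release: ModelRelease) -> bool:
    """Below the threshold, release is unrestricted; above it, evals gate it."""
    return not requires_license(release) or release.passed_safety_evals

print(may_release(ModelRelease("small-model", 1e22, False)))     # True
print(may_release(ModelRelease("frontier-model", 3e26, False)))  # False
```

Tying the gate to training compute rather than measured behavior is itself a contested design choice: easy to administer, but only a rough proxy for capability.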

Section 230, the law that shields online platforms from liability for user-generated content, is also up for debate. It's far from clear that its protections extend to content generated by AI tools rather than by users, and at the hearing IBM argued that, as it is not a platform company, a reasonable care standard should apply instead. Whatever the legal answer, we think companies should take responsibility for the content being generated on their platforms.

The military application of AI is a very different story. As drones become increasingly capable, the potential for autonomous warfare is a serious concern. If AI can select targets on its own, it raises questions about accountability and the ethics of war. We need to have a national conversation about how we're going to regulate this technology in order to prevent harm.

One of the most significant implications of generative AI is its potential to disrupt our entire way of life. As we explore new frontiers in art, music, and literature, it's essential that we consider the impact on society as a whole. How will this technology change the nature of creativity? Will it create new opportunities or exacerbate existing inequalities?

These are just a few of the questions that need to be answered as we navigate the digital distribution era. As content creators, industry leaders, and lawmakers, we have a responsibility to ensure that this technology is used for the greater good. We're excited to see where this journey takes us and how we can work together to shape the future of creative industries.

"WEBVTTKind: captionsLanguage: enlonger fantasies of Science Fiction they were real and present the promises of during cancer or developing new understandings of physics and biology or modeling climate and weather all very encouraging and hopeful but we also know the potential Harms and we've seen them already weaponized disinformation housing discrimination harassment of women and impersonation fraud voice cloning deep fakes these are the potential risks despite the other rewards and for me perhaps the biggest nightmare is the looming new Industrial Revolution the displacement of millions of workers the loss of huge numbers of jobs the need to prepare for this new Industrial Revolution in skill training and relocation that may be required and already industry leaders are calling attention to those challenges to quote chat gbt this is not necessarily the future that we want we need to maximize the good over the bad Congress has a choice now we had the same Choice when we Face social media we failed to seize that moment the result is Predators on the internet toxic content exploiting children creating dangers for them and Senator Blackburn and I and others like Senator Durbin on the Judiciary Committee are trying to deal with it kids online safety act but Congress failed to meet the moment on social media now we have the obligation to do it on AI before the threats and the risks become real sensible safeguards are not in opposition to Innovation accountability is not a burden far from it they are the foundation of how we can move ahead while protecting public Trust perhaps the biggest nightmare is the looming new Industrial Revolution the displacement of millions of workers the loss of huge numbers of jobs the need to prepare for this new Industrial Revolution in skill training and relocation that may be required and already industry leaders are calling attention and I think my question is what kind of an innovation is it going to be is it going to be like the printing press that diffused knowledge and power and learning widely across the landscape that empowered ordinary everyday individuals that led to Greater flourishing that led above all to Greater Liberty or is it going to be more like the atom bomb huge technological breakthrough but the consequences severe terrible continue to haunt us to this day before we release gpt4 our latest model we spent over six months conducting extensive evaluations external red teaming and dangerous capability testing we are proud of the progress that we made gpt4 is more likely to respond helpfully and truthfully and refuse harmful requests than any other widely deployed model of similar capability however we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models for example the U.S government might consider a combination of Licensing and testing requirements for development and release of AI models above a threshold of capabilities there are several other areas I mentioned in my written testimony where I believe that companies like ours can partner with governments including ensuring that the most powerful AI models adhere to a set of safety requirements facilitating processes to develop and update safety measures and examining opportunities for Global coordination and as you mentioned I think it's important that companies have their own responsibility here no matter what Congress does to that end IBM urges Congress to adopt a Precision regulation approach to AI this means establishing 
rules to govern the deployment of AI in specific use cases not regulating the technology itself such an approach would involve four things first different rules for different risks the strongest regulation should be applied to use cases with the greatest risks to people and Society second clearly defining risks there must be clear guidance on AI uses or categories of AI supported activity that are inherently high risk this common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts third be transparent so AI shouldn't be hidden consumers should know when they're interacting with an AI system and that they have recourse to engage with a real person should they so desire no person anywhere should be tricked into interacting with an AI system and finally showing the impact for higher risk use cases companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public and to a test that they've done so you may have had in mind the effect on on jobs which is really my biggest nightmare in the long term let me ask you what your biggest nightmare is and whether you share that concern like with all technological revolutions I expect there to be significant impact on jobs but exactly what that impact looks like is very difficult to predict if we went back to the the other side of a previous technological Revolution talking about the jobs that exist on the other side um you know you can go back and read books of this it's what people said at the time it's difficult I believe that there will be far greater jobs on the other side of this and the jobs of today will get better I think it's important first of all I think it's important to understand and think about gpd4 as a tool not a creature which is easy to get confused and it's a tool that people have a great deal of control over and how they use it and second gpt4 and things other systems like it are good at doing tasks not jobs and so you see already people that are using gpt4 to do their job much more efficiently by helping them with tasks should we be concerned about models that can large language models that can predict survey opinion and then can help organizations into these fine-tuned strategies to elicit behaviors from voters should we be worried about this for our elections yeah uh thank you Senator Hawley for the question it's one of my areas of greatest concern the the the more General ability of these models to manipulate to persuade uh to provide sort of one-on-one uh you know interactive disinformation I think that's like a broader version of what you're talking about but given that we're going to face an election next year and these models are getting better I think this is a significant area of concern I think there's a lot there's a lot of policies that companies can voluntarily adopt and I'm happy to talk about what we do there I do think some regulation would be quite wise on this topic uh someone mentioned earlier it's something we really agree with people need to know if they're talking to an AI if content that they're looking at might be generated or might not I think it's a great thing to do is to make that clear I think we also will need rules guidelines about what what's expected in terms of disclosure from a company providing a model that could have these sorts of abilities that you talk about so I'm nervous about it should we be concerned 
about that for its corporate applications for the monetary applications for the manipulation that that could come from that Mr almond uh yes we should be concerned about that to be clear openai does not we're not off you know we don't have an ad-based business model so we're not trying to build up these profiles of our users we're not we're not trying to get them to use it more actually we'd love it if they use it less because we don't have enough gpus but I think other companies are already and certainly will in the future use AI models to create very good ad predictions of what a user will like my view is that we probably need a cabinet level uh organization within the United States in order to address this and my reasoning for that is that the number of risks is large the amount of information to keep up on is so much I think we need a lot of technical expertise I think we need a lot of coordination of these efforts so there is one model here where we stick to only existing law and try to shape all of what we need to do and each agency does their own thing but I think that AI is going to be such a large part of our future and is so complicated and moving so fast this does not fully solve your problem about a dynamic world but it's a step in that direction to have an agency that's full-time job is to do this I personally have suggested in fact that we should want to do this at a global way we've lived through Napster yes but that was something that real real cost a lot of artists a lot of money oh I understand yeah for sure digital distribution era I don't I don't know the numbers on jukebox on the top of my head as a research release I can I can follow up with your office but it's not jukebox is not something that gets much attention or usage it was put out to to show that something's possible well Senator Durbin just said you know and I think it's a fair warning to you all if we're not involved in this from the get-go and you all already are a long way down the path on this but if we don't step in then this gets away from you so are you working with a copyright office are you considering protections for Content generators and creators in generative AI yes we are absolutely engaged on that again to reiterate my earlier point we think that content creators content owners need to benefit from this technology exactly what the economic model is we're still talking to artists and content owners about what they want I think there's a lot of ways this can happen but very clearly no matter what the law is the right thing to do is to make sure people get significant upside benefit from this new technology with an election upon us with primary elections upon us that we're going to have all kinds of misinformation and I just want to know what you're planning on doing it doing about it I know we're going to have to do something soon not just for the images of the candidates but also for misinformation about the actual polling places and election rules thank you Senator that we we talked about this a little bit earlier we are quite concerned about the impact this can have on elections I think this is an area where hopefully the entire industry and the government can work together quickly there's there's many approaches and I'll talk about some of the things we do but before that I think it's tempting to use the frame of social media but this is not social media this is different and so the the response that we need is different you know this is a tool that a user is using to help generate content 
more efficiently than before they can change it they can test the accuracy of it if they don't like it they can get another version but it still then spreads through social media or other ways like chat gbt is a you know single player experience where you're just using this um and so I think as we think about what to do that's that's important to understand there's a lot that we can and do do there um there's things that the model refuses to generate we have policies we also importantly have monitoring so at scale uh we can detect someone generating a lot of those tweets even if generating one tweet is okay you agree with me the the simplest way and the most effective way is have an agency that is more Nimble and smarter than Congress which should be easy to create overlooking what you do yes we'd be enthusiastic about that you agree with that Mr Marcus absolutely do you agree with that Miss Montgomery I would have some nuances I think we need to build on what we have in place already today we don't have an agency Regulators uh wait a minute no no no we don't have an agency that regulates the technology so should we have one but a lot of the issues I I don't think so a lot of these wait a minute wait a minute so IBM says we don't need an agency uh interesting should we have a license required for these tools so so what we believe is that we need to raise a simple question should you get a license to produce one of these tools I think it comes back to some of them potentially yes so what I said at the onset is that we need to um clearly Define risks do you claim section 230 applies in this area at all we are not a platform company and we've again long advocated for a reasonable Care standard and section I just don't understand how you could say that you don't need an agency to deal with the most transformative technology maybe ever well I I think we have existed is this a transformative technology that can disrupt Life as we know it good and bad I think it's a transformative technology certainly and the conversations that we're having here today have been really bringing to light the fact that this is the domains and the issues this one with you has been very enlightening to me military application how can AI change the Warfare and you got one minute I got one minute yeah all right this is that's a tough question for one minute um this is very far out of my area of expertise uh but I'll give you one example a drone can a drone you program you can plug into a drone the coordinates and it can fly out it goes over this Target and it drops a missile on this car moving down the road and somebody's watching it could AI create a situation where a drone can select the target itself I think we shouldn't allow that well can it be done sure thankslonger fantasies of Science Fiction they were real and present the promises of during cancer or developing new understandings of physics and biology or modeling climate and weather all very encouraging and hopeful but we also know the potential Harms and we've seen them already weaponized disinformation housing discrimination harassment of women and impersonation fraud voice cloning deep fakes these are the potential risks despite the other rewards and for me perhaps the biggest nightmare is the looming new Industrial Revolution the displacement of millions of workers the loss of huge numbers of jobs the need to prepare for this new Industrial Revolution in skill training and relocation that may be required and already industry leaders are calling attention 