Congress is reportedly limiting staff use of AI models like ChatGPT


Congress apparently has strict limits on the use of ChatGPT and similar generative AI tools. Axios claims to have obtained a memo from House of Representatives administrative chief Catherine Szpindor setting narrow conditions for the use of ChatGPT and similar large language AI models in congressional offices. Staff are only allowed to use the paid ChatGPT Plus service due to its tighter privacy controls, and then only for “research and evaluation,” Szpindor says. They can’t use the technology as part of their everyday work.

Even when using Plus, House offices are only allowed to use the chatbot with publicly accessible data, Szpindor adds, and its privacy features have to be manually enabled to prevent interactions from feeding data back into the AI model. ChatGPT’s free tier isn’t currently allowed, nor are any other large language models.

We’ve asked the House for comment and will let you know if we hear back. A use policy like this wouldn’t be surprising, though. Institutions and companies have warned against using generative AI due to the potential for accidents and misuse. Republicans drew criticism for running an AI-generated attack ad, for instance, and Samsung staff reportedly leaked sensitive data by using ChatGPT for work. Schools have banned these systems over cheating concerns. The House restrictions theoretically prevent similar problems, such as AI-written legislation and speeches.

The House policy might not face much opposition. Both chambers of Congress are attempting to regulate and otherwise govern AI. In the House, Representative Ritchie Torres introduced a bill that would require disclaimers for uses of generative AI, while Representative Yvette Clarke wants similar disclosures for political ads. Senators have conducted hearings on AI and put forward a bill to hold AI developers accountable for harmful content produced using their platforms.
