Generative AI can make the government mechanism less annoying


As the infrastructure for safely integrating generative artificial intelligence (AI) into the US technology sector continues to take shape, governments at various levels in the US are also grappling with the use and regulation of AI-powered tools like ChatGPT.

OpenAI, the company behind ChatGPT, continues to grow in reach and popularity. After opening its first office outside San Francisco in London, it now plans to open a second international office in Dublin.

Federal government

In July, ChatGPT creator OpenAI faced its first major regulatory threat: an FTC investigation demanding answers about a growing number of complaints accusing the AI startup of misusing consumer data, as well as increasing cases of “hallucinations” in which the tool fabricates facts or claims at the expense of innocent people or organizations.

The Biden administration expects to release its first guidance on how the federal government can use AI in the summer of 2024.

Local government

US Senate Majority Leader Chuck Schumer (D-NY) predicted in June that new AI legislation was just months away from its final stages, coinciding with the European Union entering the final stages of negotiations on its EU AI Act.

On the other hand, while some municipalities are adopting guidelines for their employees to harness the potential of generative AI, other US government agencies are imposing restrictions out of concerns about cybersecurity and accuracy, according to a recent report from WIRED.

City officials across the US told WIRED that governments at all levels are looking for ways to leverage these generative AI tools to improve some of the “most annoying features of bureaucracy” by streamlining routine paperwork and improving the public’s ability to access and understand dense government material.


However, this long-term mission is also hampered by the legal and ethical obligations enshrined in the country’s transparency laws, election laws and other laws, creating a clear boundary between the public and private sectors.

For example, on May 8, the U.S. Environmental Protection Agency (EPA) denied its employees access to ChatGPT, according to a (now finalized) FOIA request, while the U.S. State Department in Guinea is embracing the tool, using it to draft speeches and social media posts.

There is no denying that 2023 has been the year of accountability and transparency, starting with the fallout and collapse of FTX, which continues to shake our financial infrastructure like a modern-day Enron.

“Everyone values accountability, but it’s taken to another level when you’re literally the government,” said Jim Loter, interim Chief Technology Officer for the City of Seattle.

In April, Seattle released its preliminary generative AI guidelines for its employees, while the state of Iowa made headlines last month after an assistant superintendent used ChatGPT to determine which books should be removed and banned from Mason City school libraries under a recently enacted law prohibiting texts containing descriptions of “sex acts.”

For the remainder of 2023 and into early 2024, city and state agencies are expected to begin releasing the first wave of generative AI policies, which will address the balance between using AI-powered tools like ChatGPT and the risk that text prompts containing sensitive information may violate public records laws and disclosure requirements.

Currently, Seattle, San Jose, and Washington State have warned their respective employees that any information entered into a tool like ChatGPT may automatically be subject to disclosure requirements under current public records laws.


This concern also extends to the high likelihood that sensitive information will then be included in corporate databases used to train generative AI tools, opening the doors to potential misuse and the spread of inaccurate information.

For example, municipal employees in San Jose (CA) and Seattle must fill out a form every time they use a generative AI tool, while the state of Maine is prioritizing cybersecurity concerns and banning executive-branch employees from using generative AI tools for the remainder of 2023.

According to Loter, employees in Seattle have expressed interest in using generative AI to summarize lengthy investigative reports from the city’s Office of Police Accountability, which contain both public and private information.

When it comes to the large language models (LLMs) on which these tools are trained, there remains an extremely high risk of machine hallucinations or mistranslations of specific language that could convey a completely different meaning and effect.

For example, San Jose’s current guidelines do not prohibit using generative AI to create a public document or press release. However, the chance that the AI tool will replace certain words with incorrect synonyms or associations is high (e.g., “citizens” vs. “residents”).

Either way, the next maturation period of AI has arrived, taking us far beyond the early days of word-processing tools and other machine learning capabilities that we have often ignored or overlooked.

Editor’s note: This article was written by an employee of nft now in collaboration with OpenAI’s GPT-3.
