Since the widespread release of generative AI systems like ChatGPT, there's been an increasingly loud call to regulate them, given how powerful, transformative, and potentially dangerous the technology can be. President Joe Biden's long-promised Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is an attempt to do just that, through the lens of the administration's stated goals and within the limits of the executive branch's power.

The order, which the president signed on Monday, builds on previous administration efforts to ensure that powerful AI systems are safe and being used responsibly. "This landmark executive order is a testament of what we stand for: safety, security, trust, openness, American leadership, and the undeniable rights endowed by a creator that no creation can take away," Biden said in a short speech before signing the order.

The lengthy order is an ambitious attempt to accommodate the hopes and fears of everyone from tech CEOs to civil rights advocates, while spelling out how Biden's vision for AI works with his vision for everything else. It also shows the limits of the executive branch's power. While the order has more teeth to it than the voluntary commitments Biden has secured from some of the biggest AI companies, many of its provisions don't (and can't) have the force of law behind them, and their effectiveness will largely depend on how the agencies named within the order carry them out. It may also depend on whether those agencies' ability to make such regulations is challenged in court.

Broadly summarized, the order directs various federal agencies and departments that oversee everything from housing to health to national security to create standards and regulations for the use or oversight of AI. These include guidance on the responsible use of AI in areas like criminal justice, education, health care, housing, and labor, with a focus on protecting Americans' civil rights and liberties. The agencies and departments will also develop guidelines that AI developers must adhere to as they build and deploy this technology, and dictate how the government itself uses AI. There will be new reporting and testing requirements for the AI companies behind the largest and most powerful models. Throughout, the responsible use (and creation) of safer AI systems is encouraged as much as possible.

The Biden administration made sure to frame the order as a way to balance AI's potential risks with its rewards: "It's the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks," White House deputy chief of staff Bruce Reed said in a statement.

The order invokes the Defense Production Act to require companies to notify the federal government when training an AI model that poses a serious risk to national security or public health and safety. They must also share the results of their risk assessment, or red team, testing with the government. The Department of Commerce will determine the technical thresholds that models must meet for the rule to apply to them, likely limiting it to the models with the most computing power. The National Institute of Standards and Technology will also set red team testing standards that these companies must follow, and the Departments of Energy and Homeland Security will evaluate various risks that could be posed by those models, including the threat that they could be employed to help make biological or nuclear weapons. The DHS will also establish an AI Safety and Security Board composed of experts from the private and public sectors, which will advise the government on the use of AI in "critical infrastructure." Notably, these rules largely apply to systems developed going forward, not to what's already out there.

Fears that AI could be used to create chemical, biological, radioactive, or nuclear (CBRN) weapons are addressed in a few ways. The DHS will evaluate the potential for AI to be used to produce CBRN threats (as well as its potential to counter them), and the DOD will produce a study that looks at AI biosecurity risks and comes up with recommendations to mitigate them. Of particular concern here is the production of synthetic nucleic acids, i.e. genetic material, using AI. In synthetic biology, researchers and companies can order synthetic nucleic acids from commercial providers, which they can then use to genetically engineer products. The fear is that an AI model could be deployed to plot out, say, the genetic makeup of a dangerous virus, which could then be synthesized using commercial genetic material in a lab.