Open-Source AI Is Impossible (Not About Right and Wrong)


A culture war is emerging in AI between those who believe model development should be restricted by default and those who believe it should be unrestricted. In 2024 that clash is spilling over into the law, with major implications for the future of open innovation in AI. The technologies under most scrutiny today are generative AI models that have learned to read, write, draw, animate, and speak, and that power tools like ChatGPT. Intertwined with the wider debate over AI regulation is a heated, ongoing disagreement over the risk of open models (models that can be used, modified, and shared by other developers) and the wisdom of releasing their distinctive settings, or "weights," to the public.


Prevailing regulatory trends loom over transparency and competition in AI, threatening to stifle open development and accountability.


Since the launch of powerful open models like the Llama, Falcon, Mistral, and Stable Diffusion families, critics have pressed to keep other such genies in the bottle. "Open source software and open data can be an extraordinary resource for furthering science," wrote two U.S. senators to Meta (creator of Llama), but "centralized AI models can be more effectively updated and controlled to prevent and respond to abuse." Think tanks and closed-source firms have called for AI development to be regulated like nuclear research, with restrictions on who can develop the most powerful AI models. Last month, one commentator argued in IEEE Spectrum that "open-source AI is uniquely dangerous," echoing calls for the registration and licensing of AI models.

The debate is surfacing in recent efforts to regulate AI. First, the European Union has just finalized its AI Act to govern the development and deployment of AI systems. Among its most hotly contested provisions was whether to apply these rules to "free and open-source" models. Second, following President Biden's executive order on AI, the U.S. government has begun to compel reports from the developers of certain AI models and will soon launch a public inquiry into the regulation of "widely-available" AI models.

However our governments choose to regulate AI, we need to promote a diverse AI ecosystem: from large companies building proprietary superintelligence to everyday tinkerers experimenting with open technology. Open models are the bedrock of grassroots innovation in AI. I serve as head of public policy for Stability AI (makers of Stable Diffusion), where I work with a small team of passionate researchers who share media and language models that are freely used by millions of everyday developers and creators around the world. My concern is that this grassroots ecosystem is uniquely vulnerable to mounting restrictions on who can develop and share models. Eventually, these regulations may limit fundamental research and collaboration in ways that erode the culture of open development, which made AI possible in the first place and helps make it safer.

Open AI Is Impossible?


Open models are instrumental in fostering transparency and competition within the realm of AI. They are poised to support an array of creative, analytical, and scientific applications in the years ahead, extending beyond the capabilities of current text and image generators. Anticipated applications include personalized tutors, desktop healthcare assistants, and backyard film studios, revolutionizing essential services, altering online information access, and reshaping both public and private institutions. In essence, AI is set to become indispensable infrastructure.

As I have argued in previous addresses to the U.S. Congress and U.K. Parliament, the digital landscape of the future must not be monopolized by a handful of opaque "black box" systems controlled by major tech corporations. Presently, our digital economy operates on inscrutable systems dictating content delivery, information access, advertising exposure, and online interactions. Without the ability to scrutinize these systems or develop competitive alternatives, we risk repeating the centralized control witnessed in the evolution of the Internet.

The pivotal role of open models in this context cannot be overstated. By releasing a model's weights, researchers, developers, and regulators gain insight into the inner workings of AI engines, enabling assessment of suitability and vulnerability mitigation prior to real-world deployment. Everyday developers and small enterprises can leverage these open models to innovate, customize AI applications, refine safer AI models for specific tasks, create more inclusive AI representations, or establish new AI ventures without exorbitant development costs.
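To make this concrete, here is a minimal sketch of that kind of inspection, assuming the Hugging Face transformers library; the checkpoint name below is purely illustrative, and any openly licensed model could be substituted. With the weights in hand, a developer can audit, run, or fine-tune the model entirely on local hardware:

# Minimal sketch: loading an openly released model and inspecting its weights.
# Assumes the Hugging Face `transformers` library; the model name is an
# illustrative placeholder for any open checkpoint you are licensed to use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "stabilityai/stablelm-2-1_6b"  # illustrative open checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Because the weights are held locally, they can be audited directly,
# e.g. by counting parameters or probing individual layers.
total_params = sum(p.numel() for p in model.parameters())
print(f"{model_name}: {total_params / 1e9:.2f}B parameters")

# The model can also be run (or fine-tuned) without any external API.
inputs = tokenizer("Open models allow", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

None of this is possible with a closed model served only through an API, which is precisely why open weights matter for audit and customization.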

Transparency and competition are foundational to a thriving digital ecosystem, as evidenced by the ubiquity of open-source software like Android and Linux. The widespread adoption of such software has contributed significantly to global value creation, estimated at up to US $8.8 trillion. Notably, recent strides in AI owe much to open research initiatives, collaborative code libraries like PyTorch, and the collective efforts of researchers and developers worldwide.

Freedom (or Regulations)


Fortunately, no government has ventured to abolish open models altogether. If anything, governments have resisted the most extreme calls to intervene. The White House declined to require premarket licenses for AI models in its executive order. And after a confrontation with its member state governments in December, the E.U. agreed to partially exempt open models from its AI Act. Meanwhile, Singapore is funding a US $52 million open-source development effort for Southeast Asia, and the UAE continues to bankroll some of the largest available open generative AI models. French President Macron has declared "on croit dans l'open-source" ("we believe in open source").

However, the E.U. and U.S. regulations could put the brakes on this culture of open development in AI. For the first time, these instruments establish a legal threshold beyond which models will be deemed "dual use" or "systemic risk" technologies. Those thresholds are based on a range of factors, including the computing power used to train the model. Models over the threshold will attract new regulatory controls, such as notifying authorities of test results and maintaining exhaustive research and development records, and they will lose E.U. exemptions for open-source development.

In one sense, these thresholds are a good-faith effort to avoid overregulating AI. They focus regulatory attention on future models with unknown capabilities instead of restricting existing models. Few existing models will meet the current thresholds, and the first that do will come from well-resourced firms equipped to meet the new obligations.

In another sense, however, this approach to regulation is troubling, and it augurs a seismic shift in how we govern novel technology. Grassroots innovation may become collateral damage.



Do regulations create problems for everyday developers?


Regulating "upstream" components like models could disproportionately deter research in "downstream" systems. Many of the restrictions for models surpassing the threshold presuppose that developers are sophisticated firms with formal relationships to the users of their models. For instance, the U.S. executive order requires developers to report which individuals can access a model's weights and to detail the measures taken to secure those weights. Similarly, E.U. legislation requires developers to conduct "state of the art" evaluations and to systematically monitor incidents involving their models.


However, the AI ecosystem extends far beyond corporate labs to encompass countless developers, researchers, and creators who freely access, refine, and share open models. They iterate on powerful "base" models to develop safer, less biased, or more reliable "fine-tuned" models, which they release back to the community.

If governments treat these everyday developers like the companies that initially released the model, problems arise. Developers operating from dorm rooms and dining tables would struggle to comply with premarket licensing and approval requirements, or with the "one size fits all" evaluation, mitigation, and documentation mandates initially drafted by the European Parliament. And they may hesitate to contribute to model development, or to other open software, if they fear liability for downstream use or abuse of their research. Individuals releasing new and improved models on platforms like GitHub shouldn't face the same compliance burden as companies like OpenAI or Meta.


The criteria underlying these thresholds also lack clarity. Before erecting barriers around the development and dissemination of a valuable technology, governments should evaluate the initial risk of the technology, the residual risk after accounting for all available legal and technical mitigations, and the opportunity cost of getting it wrong.

However, there is still no framework for determining whether these models genuinely pose a serious and unmitigated risk of catastrophic misuse, or for assessing the impact of these regulations on AI innovation. The initial U.S. threshold of 10^26 floating-point operations (FLOPs) in training computation emerged as a passing footnote in a research paper. The E.U. threshold of 10^25 FLOPs is an order of magnitude more conservative and surfaced only in the final month of negotiations. Many models are likely to cross these thresholds in the foreseeable future. Furthermore, both governments retain the authority to adjust these benchmarks at their discretion, potentially sweeping in a vast number of smaller yet increasingly capable models, many of which can run locally on laptops or smartphones.
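For a rough sense of scale, a common back-of-the-envelope approximation puts training compute at about 6 x (parameter count) x (training tokens). The sketch below uses that rule of thumb with illustrative, assumed model sizes (not official figures for any real model) to show how the two thresholds compare; it is not how regulators actually measure compliance:

# Back-of-the-envelope training-compute estimate: FLOPs ~ 6 * N * D,
# where N = parameter count and D = number of training tokens.
# The model sizes and token counts below are illustrative assumptions.

US_THRESHOLD = 1e26  # reporting threshold under the U.S. executive order (FLOPs)
EU_THRESHOLD = 1e25  # "systemic risk" threshold under the E.U. AI Act (FLOPs)

def training_flops(params, tokens):
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

examples = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),
    "70B params, 2T tokens": training_flops(70e9, 2e12),
    "400B params, 15T tokens": training_flops(400e9, 15e12),
}

for name, flops in examples.items():
    print(f"{name}: {flops:.1e} FLOPs  "
          f"(over E.U. threshold: {flops > EU_THRESHOLD}, "
          f"over U.S. threshold: {flops > US_THRESHOLD})")

Under this approximation, the hypothetical 400B-parameter run would exceed the E.U. threshold but not the U.S. one, which illustrates how the lower E.U. figure captures a wider range of models.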


Why is it crucial to regulate AI while preserving openness?


Regulating AI while safeguarding openness is imperative. There is consensus that AI regulation is necessary, and all stakeholders, from model developers to application deployers, have a role to play in mitigating emerging risks. However, any new regulations must consider their impact on grassroots innovation in open models. Presently, well-intentioned regulatory efforts run the risk of stifling open development. Taken to the extreme, these frameworks could restrict access to foundational technology, saddle hobbyists with corporate obligations, or inhibit the exchange of ideas and resources among everyday developers.

In many respects, models are already regulated through a complex patchwork of legal frameworks governing the development and deployment of technology. Where existing law has gaps, such as U.S. federal law on abusive, fraudulent, or political deepfakes, those gaps should be closed.

However, preemptive constraints on model development should be a last resort. Regulation should focus on addressing emerging risks while preserving the culture of open development that made these breakthroughs possible and that continues to foster transparency and competition in AI.





























 