5 Important Takeaways from the 2023 #shifthappens Conference
Speaking at this past week’s #shifthappens Conference, I had the pleasure of discussing both the potential and the pitfalls of generative AI with fellow panelists David Pryor Jr., Alex Tuzhilin, Julia Glidden and Gerry Petrella. Our wide-ranging discussion covered how regulators can address the privacy, security and transparency concerns that underlie this transformative technology. Though no one would deny the inherent complexity of many of these challenges, our session, like many other discussions during the conference, suggests some key takeaways:
- Trust Is Paramount … and Hard to Come By.
A large portion of the public distrusts big tech, and AI in particular. Although the public eagerly embraced tech advances in the early days of Facebook, Twitter and Google, that relationship has soured amid privacy concerns and the amplification of extreme voices. People worry that everything they do online will result in tailored advertisements. Are they being tracked by their phones? Is Alexa listening to every conversation?
AI amplifies this concern. Will AI replace your job? Will AI dehumanize society? Will AI dominate humans? This distrust is particularly evident in the United States, where fully half of Americans feel that AI will harm their interests in the next 20 years. By comparison, only 22 percent of East Asians feel that way.
- Information and Transparency Would Help Ease Public Concerns.
AI has a “Black Box” problem: Users can see the inputs and outputs of these programs but have no clear view of the algorithms and inner workings behind them. This opacity around AI’s decision-making makes it much harder to ensure these programs operate ethically and securely, so implementing controls to avoid biased or inaccurate outputs and developing strategies to enhance the “explainability” of AI models are becoming increasingly important.
- Private Sector Leadership Is Needed.
The private sector needs to lead in developing guardrails for AI and promoting its beneficial uses. AI is often compared to the Wild West: there seem to be no rules or accepted standards under which AI service providers operate. Although the Biden Administration has brought together several leading AI companies, the results remain far from satisfactory.
- Policy Churn Creates Uncertainty.
The U.S. Congress has held multiple hearings and produced more than 30 AI bills so far this year, and many states are drafting their own AI legislation, which will greatly complicate the development of a single U.S. policy. This policy churn creates uncertainty, and the international picture is similar.
- International Standards Are Needed.
There are several AI issue areas in need of international standards that would facilitate beneficial AI regulation:
- Watermarking
- Privacy and cybersecurity
- Medical uses
- Critical infrastructure
- IP and attribution
- Bias, equity and civil rights
- Investment and R&D
- Employment
- Education
- Energy usage
Of course, that last takeaway encompasses years of potential legislation across a host of industries. To facilitate and accelerate progress on AI regulation and standardization, industry stakeholders recently joined together to launch the nonprofit AI Trust Foundation, which will work to create an environment for developing AI standards and transparency measures that ensure effectiveness, safety and accountability. With a broad range of stakeholder members and advisory groups focused on critical infrastructure, IP, liability, national security, equity and other issues, the foundation represents a solid step toward protecting individuals and governments and reducing public anxiety.