May 1, 2023
U.S. Department of Commerce
1401 Constitution Ave NW
Washington, DC 20230
To: Gina Raimondo, US Secretary of Commerce
Re: Public Comments on AI Safety and Concerns
Dear Secretary Raimondo:
As the Founder and CEO of Prestidge Group, an international agency incorporated in the United States that focuses on executive branding, PR, and speaker relations, I regularly work with and advise leading technology founders and CEOs actively involved in developing solutions related to the metaverse, Web 3.0, and artificial intelligence (AI). Additionally, as a trained neurolinguistic programmer, certified digital marketer, and leader in the metaverse and emerging technologies, I serve on the boards of several Web 3.0 startups, where I am regularly asked to counsel clients on the future of technology and its potential impact on consumers and workers alike, particularly since the introduction of OpenAI’s ChatGPT in November 2022.
Ethical decision-making must remain in human hands
Based on these experiences, I strongly urge the Commerce Department and the Biden Administration to take timely action regarding the rapid advancement of AI and smart robotics.
I do not support a ban on AI technologies nor wish to impede AI innovation, particularly given its tremendous potential to help address complex, global issues such as climate change, disease, poverty, and hunger. But the U.S. and governments around the world need to play a significant role in safely guiding the trajectory of AI technologies for the long-term benefit, prosperity, and safety of humanity. That doesn’t just mean imposing safety restrictions on how AI is deployed worldwide and ensuring that displaced human workers get the support and retraining they need to work effectively and safely alongside machines, but also ensuring that ethical decision-making remains solely in the hands of qualified human beings.
Critical turning point for humanity
While AI is not a new concept, the accelerating pace at which this technology is developing has been astounding; we are now at a critical turning point for the human race. The large language models employed by ChatGPT and competing chatbots, for example, are leaps and bounds ahead of anything we have experienced before in terms of machine intelligence, and we are rapidly witnessing their influence on a vast array of industries, an impact that goes well beyond the previously conceived notion that AI would simply automate mundane processes.
ChatGPT and similar large language models are advanced, adaptive, and able to offer value across a myriad of fields. Indeed, what these systems can do (and what still needs refining) has made ChatGPT a symbol of the AI revolution quickly unfolding before us.
This has further led to long-overdue discussions about what the full impact of that revolution will mean for humanity. Though most economists predict that AI will ultimately create more jobs than it eliminates, recent reports estimate that as many as 300 million workers worldwide could be displaced by AI. Goldman Sachs recently reported that as many as two-thirds of all U.S. jobs could be fully or partially automated today, a staggering statistic that rightly has many American workers anxious. These concerns are justified: there has been little public discussion about who will be accountable for retraining and upskilling displaced workers for the jobs of tomorrow, and even less dialogue about what that training will consist of, who will conduct it, and who will pay for it.
Other concerns relate to privacy, whether AI systems will help address racial and ethnic inequity or make it worse, and whether AI can be trusted to engage safely with humans on an ongoing basis, given that such systems can only emulate human behaviors and feelings, not actually experience them.
Some worry that AI will make humans obsolete, while others express great concern about what will happen to our culture if much of human art and literature is replaced by material generated by machines that are designed not so much to innovate and defy convention as to detect and replicate patterns.
With so many questions still unanswered, it makes sense why many would be alarmed.
Calls to pause AI advancement unrealistic
Recently, tech leaders including Elon Musk and Steve Wozniak signed an open letter from the non-profit Future of Life Institute calling for a six-month pause on the training and rollout of AI systems more powerful than GPT-4. The letter highlights many concerns, including potential widespread job loss, the proliferation of fake information, the risk of making humans obsolete, and the eventual downfall of society as we know it should the development of AI be left in the hands of unfettered and unelected tech leaders.
While these concerns are valid, especially when addressing a topic as complex and as uncharted as this, I believe that:
- Six months is too short a period of time to truly assess the situation surrounding this technology.
- It is unrealistic to believe that other countries will follow suit should the United States enforce a halt in AI development, even temporarily.
- It will not be enough to consult AI labs and independent experts about the future of AI.
Despite these reservations, the fact that more than 27,000 people from around the world have now signed the letter clearly demonstrates that, regardless of profession, nationality, ethnicity, or political beliefs, people understand that AI is perhaps the single most influential innovation of our lifetime, and that the decisions we make today will shape the future of the entire human race for generations.
According to inventor and futurist Ray Kurzweil, a Director of Engineering at Google who has consistently predicted events in the tech world, such as when a computer would surpass and defeat the world chess champion, we are swiftly approaching the AI singularity: the point at which artificial intelligence will finally surpass humanity. This, he believes, is just 22 years away, in 2045. Even earlier, by 2029, he expects AI to pass a valid Turing test and achieve human-level intelligence.
This is not meant as a premonition of an impending apocalyptic collapse, but as a call for vigilance. AI is evolving rapidly, and I believe decision-makers will need to act swiftly to ensure it is harnessed for the benefit of humanity rather than left to the devices of profit-driven technology firms.
While the EU has been more aggressive, the United States has, for the most part, let the technology sector regulate itself. Search engines, social media companies, content aggregators, and e-commerce sites like Amazon have enjoyed vast latitude to obtain, classify, and sell personal data, often without users’ knowledge or explicit permission, and to slice that data ever more finely, while at the same time remaining exempt from laws that would hold them accountable for the content that runs on their channels and infrastructure.
Indeed, social media and search engine companies use algorithms and incentive programs that reward users based on the number of likes, comments, and pass-along rates rather than on content quality and accurate information. As a result, both individuals and institutions have suffered great harm from the spread of misinformation, online bullying and harassment, internet fraud, the pirating of copyrighted media, manipulative activity by foreign and domestic terrorists and hate groups, and even deliberate attempts to influence and undermine election outcomes. Algorithmic personalization makes it possible for people never to encounter any idea or concept they disagree with, which infantilizes users and deepens polarization.

Despite calls for them to do so, and numerous Congressional and U.S. agency hearings and investigations, these companies have done little to address these problems. Now many of these same companies, and the VC firms that first funded them, are investing billions of dollars and vast resources in developing and advancing AI. It would be a mistake to allow them to commercialize AI technologies without restrictions and strict enforcement rules to hold them accountable for how these tools operate and function.
Global technologies require global rules
To leave it up to each individual country to regulate AI technology would also be a mistake. Data privacy and protection rules have evolved jurisdiction by jurisdiction, which continues to cause endless problems, leads to contradictory guidance, and places a huge onus on businesses that must follow different laws in each legal jurisdiction where they operate. A global solution, with regulations agreed upon by all nations, is required. Though the UN may offer a way forward, UN resolutions too often lack strong enforcement capabilities to punish rogue players. Convening annual AI Summits, similar to the COP Summits the UN holds on climate change, would nonetheless be useful in raising awareness of the challenges (and potential) of AI and would allow governments and NGOs around the world to come together to share and discuss viable solutions to protect humanity and human culture through the safe and ethical use of AI.
Secure input beyond tech, economic and government experts
Finally, to ensure a comprehensive regulatory framework for this technology, it is imperative that a committee of diverse experts, lawyers, scientists, psychologists, technologists, labor union representatives, physicians, business ethicists, artists, writers, creativity experts, college and university presidents, data privacy executives, economists, communications and neurolinguistic experts, organizational anthropologists, humanists, philosophers, weaponization and disarmament professionals, social justice and equity activists, workforce training leaders and other relevant parties comes together to balance all the different perspectives surrounding AI. The World Economic Forum’s Davos Conference is a good example of how such a panel or committee of global experts could work effectively, and WEF has already been talking about and issuing reports and recommendations about AI and automation for years. The next step would be to set up an AI Forum, not as a subset of Davos, but as a separate meeting devoted to making sense of and coming to a consensus on ethical AI guidelines that empower innovation while restricting and sanctioning misuse.
Approach AI the way we address climate change and nuclear weapons
The AI Revolution poses an existential threat to humanity in the same way that nuclear weapons and climate change do. Like the rise and proliferation of nuclear weapons and the rising temperatures and sea levels brought on by carbon emissions, AI is a product of human endeavor and technological advancement, and of the new problems that advancement creates. Because the world collectively recognized the horrific threat nuclear weapons posed, that threat, though still present, has been significantly reduced over the years. We still have much work to do to address climate change, and limited time in which to do it, but we cannot wait until that enormous challenge is resolved before we tackle AI. Technology is neutral; it can be used for both good and ill. It is in the collective interest of humanity to embrace the positive benefits of AI while working to contain it and set limits so that it does not cause significant harm to humanity.
Halting the development of artificial intelligence is not the answer, nor is its regulation by one jurisdiction. What’s required is a thoughtful, unified, and global approach. I kindly urge you to take timely action for the sake of our future and use the position the U.S. holds as a world leader to invite other agencies and organizations around the globe to join in this effort.
Thank you for the opportunity to share my feedback.
Founder & CEO