ChatGPT: The Beginning of the End, or the End of the Beginning?

ChatGPT was released in November 2022 and has sent shockwaves through the industry.
OpenAI, an artificial intelligence research laboratory based in San Francisco and founded in 2015, has been heavily featured in the news lately thanks to its flagship product: ChatGPT. The program is a highly advanced natural language processing model, powered by artificial intelligence and designed to generate human-like text in response to prompts or questions. After using ChatGPT for a while, one is readily amazed by its fast, fluent language processing and its ability to converse like an actual human being. But ChatGPT can do more than converse; it can write complex essays, produce working code, engage in philosophical dialogue, and has even reportedly picked stocks better than some fund managers.
However, with Microsoft's $10 billion investment in OpenAI, and with Google and Meta introducing their own artificial intelligence models (Bard and LLaMA, respectively), it is clear that tech companies are rushing to develop the most advanced and comprehensive A.I. and outpace the competition. Beyond merely generating text, artificial intelligence today can create convincing simulacra of human faces and environments, produce near-flawless renditions of famous voices, and even create artwork and compose music (with vocals!).

Elon Musk, entrepreneur and notable rich man, supported OpenAI in its early days.
Needless to say, this rapid development of artificial intelligence has opened up a whole new world of possibilities, but it has also sparked an intensifying debate over the ramifications of such a powerful tool. Even Elon Musk, who helped co-found OpenAI and funded its operations in its early days, has come out against the deluge of A.I. software. Musk, along with other software engineers and industry executives, signed an open letter calling for a six-month pause on advanced A.I. development until proper safety protocols and regulations could be put in place. The fears expressed by this group and others include the dissemination of misinformation, A.I. replacing human jobs, national security risks, use in cybercrime, and more. The letter came out shortly after OpenAI's release of GPT-4, the latest version of the language model. Sam Altman neither contributed to nor responded to the letter.
Sam Altman Stands Before Congress

Sam Altman, co-founder and CEO of OpenAI, swears an oath before the Senate.
However, Altman was able to express similar concerns about his trailblazing technology when he appeared before the Senate Judiciary Committee this week. With companies in the U.S. and abroad competing covertly and aggressively to gain an edge in the A.I. race, Altman argued for government regulation to delineate what A.I. should be allowed to do as its impact on society grows. "I think if this technology goes wrong, it can go quite wrong," Altman said with an air of foreboding, echoing the sentiments of Musk and others. When asked by Senator Amy Klobuchar whether A.I. has the potential to spread misinformation in the upcoming presidential election, Altman responded that he is "quite concerned" about how the technology could influence the public.
However, even if the government were to step in and dictate the parameters of A.I. development and usage, that in itself could carry negative consequences. As John McGinnis, a law professor at Northwestern University, put it: "Anything that slows down AI here will not slow down AI in places like China. The fact is that AI…is very intertwined with national security, and the United States needs to remain a leader in AI." China is America's direct technological competitor and, given the current frigidity of the relationship, could overtake America in the artificial intelligence space and threaten national security. Furthermore, McGinnis expresses skepticism about the effectiveness of government regulation of "misinformation," saying there is "real danger" in allowing potentially biased government entities to decide what the public should and shouldn't hear.

In a similar case, Mark Zuckerberg had to explain technological concepts to an aging Senate.
Another issue brought to light by Altman's appearance is that many members of Congress are far too old to truly comprehend how the technology works or how it can be applied. The American public already witnessed our lawmakers' lack of technological literacy when Mark Zuckerberg, CEO of Meta (then Facebook), testified before the Senate in 2018 about Facebook's privacy practices. Zuckerberg had to explain basic concepts to the aging politicians, sometimes to the point of absurdity. In Altman's case, the same problems apply. How can we expect our lawmakers to craft effective and fair regulations when they fail to study or understand the industry and technology they are regulating?
Though A.I. technology is beginning to affect society in both positive and negative ways, it is still far too early to determine whether a national catastrophe is at hand due to the software's meteoric influence. It is interesting to note, however, that while Altman perhaps did not share Musk's early trepidations about the technology, he now sees the writing on the digital wall. Does Altman know something that the general public doesn't? Or is he simply providing cover for himself while he continues to develop a handsomely funded new technology? Regardless, if disaster is headed toward our changing world, Altman strongly suggests "work[ing] with the government to prevent that from happening." As if the government has always had a great strategy for handling disasters.