OpenAI and Concerns with the Rollout of GPT-4

The latest iteration of the AI language model is bigger, faster, and potentially dangerous 

In a recent interview with ABC News, OpenAI CEO Sam Altman talked about the rollout of GPT-4. Altman starkly warned that the new technology could be used for “large-scale disinformation” and could potentially assist in cyberattacks. Given what this technology promises, Altman has strong reason to believe that certain ‘bad actors’ will build similar systems without the safety limits that OpenAI puts in place. Our fascination with AI, and with ChatGPT in particular, has reached a crossroads: the technology has the potential to reshape society, for better or worse. If a better, faster ChatGPT poses this much risk to information security, we the public should be concerned as well.

As the world embraces this technology (and quickly at that), Altman insists that feedback from the public and from regulators will be absolutely necessary in deterring the potential negative consequences of the technology. Surprisingly, the interview didn’t dwell much on the other elephant in the room: the technology’s disruptive potential to replace jobs. Although some displacement is to a degree inevitable, Altman urged the public to look at ChatGPT as more of a tool than a replacement, adding that “human creativity is limitless, and we find new jobs. We find new things to do.”

Where GPT-4 Outperforms GPT-3.5

Although ChatGPT was released only a few months ago, it is already considered the fastest-growing consumer application in history: the app hit 100 million monthly active users in just two months. To put that into perspective, TikTok took nine months to reach that many users and Instagram took nearly three years, according to a UBS study.

GPT-4 is also more multilingual, accurately answering thousands of questions across 26 languages. That growing functionality and accessibility will likely invite a wider swath of eager users. And while OpenAI claims that GPT-4 is harder to trick and offers its best-ever results on factuality, there are still vulnerabilities that Sam Altman fears could be exploited for misinformation.

We Should Still Be Concerned About Polarizing Misinformation

Regardless of whether or not GPT-4 offers the best-ever results on factuality, Sam Altman admitted to being “a little bit scared” of the power and risks that language models in general pose to society. Altman warned that their ability to automatically generate text, images, or code could be used to launch disinformation campaigns or cyberattacks, and that the technology could be abused by individuals, groups, or authoritarian governments.

(For the record, OpenAI has not yet disclosed technical information on the language model’s size, architecture, and training data.)

An ‘Engine’ For Better Reasoning 

In some instances the program can give users factually inaccurate information, in part due to its use of deductive reasoning rather than memorization, according to OpenAI. “One of the biggest differences that we saw from GPT-3.5 to GPT-4 was this emergent ability to reason better,” said Mira Murati, OpenAI’s Chief Technology Officer, adding that they “want these models to see and understand the world more like we do.”

While we aren’t exactly ‘there’ yet in terms of an all-wise, omnipotent AI, the way to think about these models, Altman noted, is as “a reasoning engine, not a fact database.” They can indeed act as a fact database, but that would miss what’s really special about the technology: its growing ability to reason, not memorize.
