OpenAI has launched a ChatGPT app on Apple’s App Store in a push to extend the reach of its AI chatbot. The move makes the chatbot easier to access, but it also carries potential downsides. The mobile app replicates the functionality of the ChatGPT website, letting users ask questions and receive AI-generated responses directly on their smartphones or tablets.
Mobile AI
The app also incorporates Whisper, OpenAI’s speech recognition model, allowing users to interact with the chatbot through spoken prompts.
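For developers, the same speech-to-text capability is exposed through OpenAI’s public API. Below is a minimal sketch, assuming the current openai Python SDK and an illustrative audio file name; how the ChatGPT app wires Whisper in internally has not been published.

```python
# Minimal sketch: transcribe a voice note with OpenAI's Whisper API,
# then use the text as a chat prompt. The file name is illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("voice_note.m4a", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",   # OpenAI's hosted Whisper model
        file=audio_file,
    )

print(transcript.text)  # recognized speech, ready to send to the chatbot
```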
This app release follows a series of recent product launches by tech giants and startups alike, all vying to bring generative AI tools to market since ChatGPT’s initial launch in November 2022.
The ChatGPT app will also feed the continued development of OpenAI’s large language models, which underpin the chatbot. Initially available in the US, the app will gradually expand to other countries and will be made compatible with Android devices in the coming weeks.
Tech groups are also working to run generative AI directly on mobile handsets rather than relying on cloud servers, a shift that would broaden the availability of tools such as the ChatGPT iPhone app and lower the associated computing costs.
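To illustrate the difference, here is a minimal sketch of on-device inference using the open-source llama-cpp-python bindings and a small quantized model file the user has already downloaded; both are illustrative choices and not part of the ChatGPT app itself.

```python
# Minimal sketch: answer a prompt locally, with no cloud round trip
# and therefore no per-request server cost. The model path is a
# hypothetical local file.
from llama_cpp import Llama

llm = Llama(model_path="models/small-chat-model.Q4_K_M.gguf")

result = llm("Q: What is generative AI? A:", max_tokens=64)
print(result["choices"][0]["text"])
```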
Regulation is urgently needed
The arrival of the ChatGPT app has intensified the scrutiny that regulators and governments worldwide are applying to the fast-growing field of AI. That scrutiny reflects concerns, voiced by AI ethicists and experts as well as a growing number of users, about the potential for the technology to be misused and the widespread job losses it could cause.
The use of AI algorithms on mobile devices also raises concerns about the accelerated spread of misinformation. Because generative models can produce content at unprecedented speed, there is a heightened risk of fake news dissemination and even disruption to democratic processes. Putting generative AI on mobile handsets means false information can be created and distributed from anywhere, compounding the challenges societies face in combating misinformation and preserving the integrity of public discourse.
On a more personal level, AI apps could lead to greater dishonesty. Take dating, for example. Users of Tinder, Bumble and co can no longer be sure that the person they’re chatting to isn’t using bots to enhance their writing or photos. In other words, we may increasingly be talking to polished versions of each other, if not to entirely fake profiles that look more convincing than ever before.
Sam Altman, OpenAI’s chief executive and co-founder, recently appeared before a US Senate subcommittee on privacy, technology, and the law. Altman advocated for the regulation of this rapidly advancing technology, emphasizing the need to establish frameworks that mitigate the potential for abuse and promote responsible AI usage.
Key takeaways
- OpenAI’s ChatGPT app is now available on the Apple App Store, initially in the US
- The accessibility of generative AI on mobile devices raises concerns about the spread of misinformation and fake news
- OpenAI co-founder and CEO Sam Altman agrees that regulation is urgently required to curb the potential harms of AI