According to a document seen by The Wall Street Journal and people familiar with the matter, Apple is concerned that AI tools could leak the company’s confidential data. In addition to ChatGPT, Apple has barred staff from using GitHub’s Copilot, a tool that helps developers write code via auto-completion.
Many businesses, including banks, financial-services firms, and healthcare institutions, have avoided adopting ChatGPT out of fear that employees could inadvertently feed the chatbot sensitive proprietary information. Samsung banned employee use of generative AI tools like ChatGPT after discovering that staff had uploaded sensitive source code to the platform. The company was reportedly concerned that data transmitted to AI platforms such as Bing and Google Bard could end up being disclosed to other users. JPMorgan Chase and Verizon have similarly banned the use of these AI tools.
OpenAI has already sold Morgan Stanley a private ChatGPT service that allows employees to ask questions and analyze content in thousands of the bank’s market research documents. Microsoft is working on a version of ChatGPT targeted at business customers to address privacy concerns.
The move comes as Apple is reportedly working on its own large language models and AI technologies, an effort led by John Giannandrea, senior vice president of Machine Learning and AI Strategy. Giannandrea previously worked at Google and now reports directly to Apple CEO Tim Cook. The Wall Street Journal did not provide further details about what Apple’s AI efforts encompass at this time.
OpenAI’s ChatGPT has been accessible on the web and via several third-party iOS apps for some time, but yesterday the company released the first official ChatGPT app for the iPhone and iPad.