Sina Technology, Beijing, January 27 evening news. According to reports, internal Slack messages show that ChatGPT, the artificial intelligence chatbot, is already being used by Amazon employees across many different job functions, including answering interview questions, writing software code, and creating training documents.
Amazon's cloud division, Amazon Web Services (AWS), has set up a small working group to better understand the impact of artificial intelligence on its business, an employee said on Slack. Through testing, the team found that ChatGPT "does a great job" of answering AWS customer support questions. In addition, the AI tool was "very good" at creating training documents and "very powerful" on enterprise strategy questions.
Also on Slack, the employee said ChatGPT was "excellent" at writing troubleshooting guides and answering "difficult" support questions for AWS Aurora database engineers. It was also able to "figure out the customer's corporate goals."
Since its release in November, ChatGPT has set the tech world on fire and has been hailed as one of the most impressive technological innovations of 2022. Recently, Microsoft also announced an additional multi-billion dollar investment in ChatGPT's developer, OpenAI. Apparently, ChatGPT's sudden rise has also attracted Amazon's attention.
That attention, however, has also prompted Amazon to warn employees to be "careful" about using the artificial intelligence tool in the workplace. An Amazon lawyer told employees not to share confidential company information, including code being written, with ChatGPT.
The attorney stated, "This is important because the information you input may be used as iterative training data for ChatGPT, and we don't want its output to contain or resemble our confidential information."
ChatGPT's rapid popularity has the potential to disrupt many industries, including media, academia, and healthcare, prompting efforts to find new use cases for the chatbot and to assess its potential impact. At the same time, however, its sudden rise has raised many new ethical questions.
Emily Bender, a professor of linguistics at the University of Washington, said, "OpenAI is far from transparent about how it uses data. If user-input data is used for training, I would expect companies to wonder whether, after months of widespread use of ChatGPT, it might become possible to extract a company's confidential information."
Editor: Liu Mingliang