- Amazon is cracking down on the use of AI in job interviews.
- AI-assisted interviews pose ethical challenges and have sparked debate in Silicon Valley.
- Some Amazon employees see the tools as useful, while others see them as dishonest.
Generative AI tools like coding assistants and "teleprompter" apps feed people live answers during job interviews, giving candidates an edge.
Amazon, one of the world’s largest employers, wants to curb this growing trend.
The latest Amazon guidelines, shared with recruiters inside the company, say job applicants can be disqualified from the hiring process if they are found to have used an AI tool during job interviews.
Amazon believes that using AI in interviews gives candidates an "unfair advantage" and prevents the company from evaluating their "authentic" skills and experience, according to the guidelines, which were obtained by Business Insider.
"To ensure a fair and transparent recruiting process, please do not use GenAI tools during your interview unless clearly permitted," the guidelines say. "Failure to comply with these guidelines may result in disqualification from the recruiting process."
The guidelines also tell Amazon recruiters to share these rules with job candidates.
The crackdown highlights one of the many ethical challenges emerging from the rise of generative AI. Amazon has restricted employees' use of AI tools like ChatGPT, though it encourages them to use internal AI applications to boost productivity. "Hacking" job interviews with AI is a growing trend, fueling debate across Silicon Valley.
In a recent internal conversation seen by BI, some Amazon employees debated the need to block AI tools during job interviews when those tools can improve the quality of work.
"This is certainly a growing trend, especially for tech/SDE roles," one of the messages said, referring to software development engineers.
An Amazon spokesperson said the company's recruiting process prioritizes ensuring that candidates meet a high bar.
When applicable, candidates must acknowledge that they will not use "unauthorized tools, like GenAI, to assist them" during an interview, the spokesperson added in an email.
Tips to identify the use of GenAI tools
The trend has become such a big problem for Amazon that the company even shared internal tips on how to spot applicants using GenAI tools during interviews.
Indicators, the guidelines say, include:
- The candidate can be seen typing as questions are asked. (Note: it is not uncommon for candidates to write down or type the question being asked while preparing to answer.)
- The candidate appears to be reading their answers rather than responding naturally. This may include correcting themselves after misreading a word.
- The candidate's eyes appear to be following text or looking elsewhere, rather than watching their primary screen or moving naturally during the conversation.
- The candidate gives a confident answer that does not directly address the question.
- The candidate relays the tool's results even when they appear incorrect or irrelevant. This is often shown by the candidate seeming distracted or confused as they try to make sense of the results.
While candidates are allowed to discuss how they have used GenAI applications to "achieve efficiencies" in their current or previous roles, they are strictly prohibited from using them during job interviews, Amazon's guidelines add.
A recent video produced by a company claiming a user received a job offer from Amazon after using its coding assistant during an interview raised alarms internally, a person familiar with the matter said. This person asked not to be identified because they were not authorized to speak to the media.
The 'mainstream' problem
This is not just a problem at Amazon. Job seekers are becoming increasingly bold in interviews, using a variety of AI tools. A recent experiment showed how easy it was to cheat in job interviews using tools like ChatGPT.
In October, xAI cofounder Greg Yang wrote on X that he had caught a job candidate cheating with Anthropic's Claude service.
"The candidate tried to use Claude during the interview, but it was very obvious," Yang wrote.
Matthew Bidwell, a management professor at the Wharton School, told BI that these AI tools have "definitely penetrated the mainstream, and employers are concerned about it," citing conversations with students in his executive-management program.
Bidwell said it is a problem when employers cannot detect these tools and job candidates are uncomfortable admitting to using them.
"There's a real risk that people use it to misrepresent their abilities, and I think it's somewhat unethical," Bidwell said.
Raising the bar?
Not everyone is against AI. Some Silicon Valley companies are open to allowing these applications in job interviews because they already use them at work. Others are making the technical interview an open-book test but adding questions for a deeper assessment.
Some Amazon employees seem less concerned about it.
One person wrote in a recent internal Amazon conversation, seen by BI, that their team was "studying" the possibility of providing a GenAI assistant to candidates and changing its hiring approach. Another person said that even if a candidate was hired after using these tools, Amazon had "other mechanisms" to address those who don't meet expectations for their roles.
A third person asked whether Amazon could benefit from this. Using GenAI can be "dishonest or unprofessional," this person said, but on the other hand, it is "raising the bar" for Amazon by improving the quality of interviews.
"If judged only by the outcome, AI could be considered a benefit," this person wrote.
Do you have a tip? Contact this reporter via email at ecim@businsinsider.com or via Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information safely.