Dscoop Community
Education
September 25, 2024

AI in the Workplace

# Artificial Intelligence
# Business Management
# Strategy

A look at the print industry's hottest topic through the lens of Human Resources, including 3 suggested actions to take before using AI in the workplace.

By Claudia St. John, Founder and CEO, The Workplace Advisors


Conversations about artificial intelligence (AI) have been everywhere recently. In the USA, Congress has held hearings about it. Worldwide, news outlets have covered it extensively. Some of you are already using it to work faster and smarter, with the help of Dscoop's "AI Innovators" series.

But how does AI impact your company's Human Resources policies and procedures?

According to the Pew Research Center, 62% of Americans believe AI will have a major impact on workers, but only 28% believe it will impact them directly. Unfortunately, AI is already impacting employees. My workplace consultancy's print clients are beginning to see "layoff due to AI" cited as a job applicant's reason for leaving a previous employer.

AI in the Hiring Process

AI tools used in the hiring process have been praised for saving hiring managers valuable time and for creating a diverse pool of applicants by removing bias from the initial review process. However, concerns have been raised that unintentional bias is built into these tools.

Resume-review tools can use predictive analysis to determine what candidate profile would be the best fit for an open position and then compare received electronic resumes to find the "best available" candidates. However, if a candidate uses words or phrases that do not fit the AI tool's expectations, the candidate receives a lower evaluation for no substantive reason.
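To see how this failure mode arises, here is a deliberately naive sketch of a keyword-based resume scorer. The keywords and resume text are hypothetical illustrations, not any real vendor's method; the point is that two candidates with equivalent experience can receive very different scores purely because of wording.

```python
# Hypothetical sketch: a naive screener that scores resumes by counting
# exact keyword matches against an "ideal candidate" profile.
# Keywords and resumes below are invented for illustration.

IDEAL_KEYWORDS = {"press operator", "color management", "prepress"}

def score_resume(text: str) -> int:
    """Count how many expected keywords appear verbatim in the resume."""
    text = text.lower()
    return sum(1 for kw in IDEAL_KEYWORDS if kw in text)

# Two candidates describing the same experience in different words:
exact_wording = "Five years as a press operator with color management and prepress experience."
synonym_wording = "Five years running printing presses, handling colour calibration and pre-press workflows."

print(score_resume(exact_wording))    # 3 — matches every keyword
print(score_resume(synonym_wording))  # 0 — same skills, different phrasing
```

Real resume-screening tools are far more sophisticated, but the underlying risk is the same: any model trained on a narrow notion of the "ideal" candidate can penalize unfamiliar but equally valid phrasing.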

More concerning are tools that analyze an applicant's personality, knowledge and communication skills using recorded responses to interview questions and facial expressions. These tools assess a candidate's fit for a job by matching them to a profile of the company's "ideal employee" based on appearance, communication skills, speech patterns, body language, personality and so on. However, some of these tools have been found to be biased, screening out people of certain genders, races and ethnicities, as well as people with disabilities, by giving lower scores for factors (such as facial structure, accents, hairstyle, or wearing glasses or head scarves) that do not match the "ideal" parameters in the programming.

Regulations on the use of these tools are already in place. In April 2023, four federal agencies – the US Equal Employment Opportunity Commission (EEOC), the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission (FTC) – issued a joint statement addressing concerns about AI and its potential impacts. The statement covered several topics, including defining AI, acknowledging its potential positive uses and negative impacts, and highlighting potential areas for discrimination. It affirmed each agency's commitment "to monitor the development and usage of automated systems and promote responsible innovation" and its "pledge to vigorously use our collective authorities to protect individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies."

Illinois, Maryland and New York City have already passed laws regulating the use of “automated employment decisions tools” in the hiring process, with many other states and cities considering similar laws.

See Claudia's take on why US print owners should not expect a large upcoming pool of job candidates.

AI-Generated Content

Most of the latest news is around “chatbots” and the AI-generated content they produce. OpenAI’s ChatGPT and many other tools are now available to the public by simply downloading the software or phone app and setting up an account.

In the workplace, chatbots can be used to research topics and to generate content such as policies, procedures, emails, letters and disciplinary notices. On the positive side, AI used for HR purposes can help address legal requirements, uncomfortable topics and messages intended for general audiences.

However, AI has also been shown to generate content that lacks empathy, is non-specific, disregards others' privacy, cannot substitute for face-to-face interaction, or contradicts itself. Asking the same question in different ways can produce different results, which can further complicate or confuse an issue.

Beyond these concerns are the inherent limitations of chatbots: they are built on large language models, which draw on many available data sources, and their results are only as good and valid as the data they reference, which is not always accurate. For example, Wikipedia is an often-used resource but, because it relies on user-generated content, it has been found to be only about 80% accurate. In some cases, chatbots have also invented their own inaccurate reference material to develop and "validate" an answer that is incorrect or fictional.

To build their databases, chatbots retain entered information for future reference by any user. Because users must input specific information to get the best results, they may enter sensitive or confidential information or trade secrets, which is then added to the chatbot's database. Depending on the information entered and the queries posed by future users, companies may find their confidential data available to anyone asking the right questions.
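One practical safeguard, assuming a company adopts a policy like the one suggested below, is to screen prompts for obviously confidential material before they are sent to a public chatbot. The following is a minimal sketch; the blocked terms and patterns are illustrative assumptions, not a complete data-loss-prevention solution.

```python
import re

# Hypothetical sketch: flag confidential material in a prompt before it is
# submitted to a public chatbot. The blocklist and pattern below are
# invented examples a company policy might specify.

BLOCKED_TERMS = ["client list", "trade secret", "salary"]
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US Social Security number format

def check_prompt(prompt: str) -> list[str]:
    """Return a list of reasons the prompt should not be submitted as-is."""
    findings = []
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            findings.append(f"contains blocked term: '{term}'")
    if SSN_PATTERN.search(prompt):
        findings.append("contains a possible Social Security number")
    return findings

print(check_prompt("Draft a disciplinary letter; employee SSN is 123-45-6789."))
```

A filter like this only catches the obvious cases; the more reliable control is training employees on what never belongs in a public tool in the first place.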

3 Actions to Take Before Using AI

As tools develop and improve, AI will find a place in most workplaces. As you determine how AI will be allowed in your workplace, consider taking these three actions:

  1. Research AI and AI tools: Learn what AI is and how it is incorporated into tools you may use now or may rely on in the future. If you choose to use AI tools, be sure to understand their validity and limitations. For example, if you are going to use virtual analysis of recorded interviews, understand the science behind it, including if the tool has been properly tested to remove implicit biases.
  2. Establish policies and procedures on AI use: Draft a policy to outline when and how AI can and cannot be used. Include clear statements prohibiting discrimination and revealing confidential information. While the policy can be general to cover any AI, develop exact procedures and expectations as you initiate AI tools.
  3. Train employees and managers: As you expand the use of AI tools in your company, train your employees and managers on when and how to use them properly and legally. Instruct users on what is and is not allowed, as well as on expectations such as reviewing and fact-checking all content before releasing it, or personalizing a letter to an employee or customer.

My team at The Workplace Advisors will continue to monitor this emerging technology and the regulatory landscape around its development and use.
