FIU Business Now Magazine - Fall 2020
 
THE MAGAZINE OF FLORIDA INTERNATIONAL UNIVERSITY'S COLLEGE OF BUSINESS
 
The AI Revolution: How Artificial Intelligence is Driving Better Business Decision-Making

From "smart" security cameras and smarter decisions in healthcare to scheduling meetings and saving money, artificial intelligence (AI) is changing the course of what is possible – and potentially profitable – in the business world. Machine-learning-enabled data analysis can help humans get routine processes done more efficiently and help consumers make better decisions.

Artificial intelligence at its highest level conjures up a robot that's programmed to think and respond as a human does. Think IBM's Watson, which took on all competitors in a "Jeopardy!" TV challenge and continues to solve complex diagnostic healthcare challenges.

But for all practical purposes, that humanoid is still far off. Today, AI includes a wealth of functions that, to a greater or lesser degree, infuse human intelligence to improve on the process of getting things done and making better decisions. And as the technology develops, it's increasingly accessible to automate more and more business functions – and can be applied by a wider audience. "AI is not just for the data sciences," said Karlene Cousins, chair of FIU Business' Department of Information Systems and Business Analytics. "Every business professional should have an understanding of how to use AI to add value to their business."

The most sophisticated aspects of AI are showing up in large consumer companies (think Amazon), as well as in hospitals and doctors' offices, where they serve as integral tools in diagnosis and remote treatment, particularly in the COVID-19 era. Less-sophisticated examples of AI are automating processes in virtually every business, creating the potential for greater process accuracy and cost savings.

It's a new world, with new possibilities – and, many experts say, a powerful tool that should be employed with a level of understanding and caution.

Implementing AI Solutions

How can a business best understand, leverage and incorporate the power of AI? Much depends on the task at hand and the resources and the goals of the organization, said Karlene Cousins, chair of the Department of Information Systems and Business Analytics at FIU Business.

While COVID-19 has opened the door to the wide potential of AI in the healthcare field, most businesses will find needs that are far less complex.

"First, look at low-hanging fruit, applications that are easy and cost-effective to implement," she said. A 2018 Harvard University survey found that most businesses are taking this route, using more accessible technology that's less expensive.

"You shouldn't be using AI for AI's sake," Cousins said. "It should be strategically aligned to your business strategy." Above all, she said, "businesses need to keep up."

Choosing the Right AI Tool

The most popular category of AI is robotic process automation (RPA). An RPA tool observes a user performing a task and shows developers how it is done. The bot's goal is to parrot the clicks and steps the user executes, mimicking routine, labor-intensive processes and automating them.

The technology has a broad spectrum of potential applications. Law firms can use natural language processing to automate contract review, extracting key provisions with the help of AI. Conversational AI solutions can learn (from training data) how a user responds to a customer, look for key words and signals in communications, and follow that format to interact with customers directly.
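
To make the contract example concrete, here is a minimal sketch in Python of provision extraction, assuming a simple keyword-and-pattern approach; the clause labels and patterns are illustrative stand-ins for what a commercial NLP tool would learn from data.

```python
import re

# Illustrative clause patterns; a production system would use a trained
# NLP model rather than hand-written regular expressions.
PROVISION_PATTERNS = {
    "termination": re.compile(r"terminat(e|ion)", re.IGNORECASE),
    "indemnification": re.compile(r"indemnif(y|ication)", re.IGNORECASE),
    "governing_law": re.compile(r"governing law|governed by the laws", re.IGNORECASE),
}

def extract_provisions(contract_text: str) -> dict:
    """Return the sentences in a contract that match each provision type."""
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    found = {label: [] for label in PROVISION_PATTERNS}
    for sentence in sentences:
        for label, pattern in PROVISION_PATTERNS.items():
            if pattern.search(sentence):
                found[label].append(sentence.strip())
    return found

sample = ("Either party may terminate this agreement with 30 days notice. "
          "This agreement shall be governed by the laws of Florida.")
print(extract_provisions(sample))
```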

"Any business that wants to make its processes more efficient can use robotic process automation and AI," said Rod Hernandez (MSIS '11, BBA '07), a senior manager and HR technology strategist at a leading management consulting firm.

AI can help businesses derive relevant knowledge through analysis of large data sets, a deep-learning function that can yield insights beyond traditional methods of data analysis. "Think of it as analytics on steroids," Cousins said.

A second, more sophisticated use of AI: intelligent automation, which uses natural language recognition and machine learning. Getting a human-like response to phone inquiries is now the norm, and providing correct, thoughtful responses in the early stages of customer engagement can create enormous efficiencies for a business. Natural-language engagement lets bots provide customer service that improves over time. This might include a digital assistant that can pull together various pieces of information to shorten the length of a phone call.
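
A bare-bones sketch of that keyword-and-signal approach, with hypothetical intents and canned replies; real conversational AI platforms learn these mappings from training data rather than from a hand-written table.

```python
# Hypothetical intents and the keywords that signal them (illustrative only).
INTENTS = {
    "order_status": ["order", "tracking", "shipped"],
    "billing": ["invoice", "charge", "refund"],
    "hours": ["open", "hours", "closed"],
}

RESPONSES = {
    "order_status": "I can help with that. What is your order number?",
    "billing": "Let me pull up your billing details.",
    "hours": "We are open 9 a.m. to 6 p.m., Monday through Friday.",
}

def respond(message: str) -> str:
    """Pick a reply by looking for key words and signals in the customer's message."""
    words = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return RESPONSES[intent]
    return "Let me connect you with a human agent."

print(respond("Has my order shipped yet?"))
print(respond("I was charged twice last month."))
```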

"The finance industry is using this a lot," Hernandez said. "It's an important tool for vendors looking for purchase orders or invoice payment information. This is the kind of digital assistant that can improve customer operations."

A third category, cognitive automation, adds more of the human element, layering in additional machine learning and language processing to develop better and better insights and predictions over time, Cousins said. This is how a retailer might analyze buying patterns and develop predictive insights to guide future purchasing, an approach that extends to building a more efficient supply chain.
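
As a rough illustration of the kind of predictive insight Cousins describes, the sketch below fits a simple trend model to made-up weekly sales and projects demand for the coming weeks, assuming Python with scikit-learn; a retailer's actual pipeline would draw on far richer data such as promotions, seasonality and store location.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical weekly unit sales for one product (illustrative data only).
weeks = np.arange(1, 13).reshape(-1, 1)          # weeks 1..12
units_sold = np.array([120, 125, 130, 128, 140, 150,
                       155, 160, 158, 170, 175, 182])

# Fit a simple trend model to the purchase history.
model = LinearRegression().fit(weeks, units_sold)

# Predict demand for the next four weeks to guide future buying.
future_weeks = np.arange(13, 17).reshape(-1, 1)
forecast = model.predict(future_weeks)
for week, demand in zip(future_weeks.ravel(), forecast):
    print(f"Week {week}: expect to sell about {demand:.0f} units")
```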

The email or text you might get from your credit card company after an unusual charge in a new location is another product of the deep-learning process. Major accounting firms have brought AI into their auditing functions to improve them and develop new insights. The supply of products or services around the globe, Cousins said, can be informed by the application of analytic tools.
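
A toy version of that fraud-alert idea, assuming scikit-learn and fabricated charge histories: the model learns what a cardholder's normal spending looks like and flags outliers for follow-up. Real card-fraud systems use far more features and far more data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Fabricated history of card charges: [amount in dollars, distance from home in miles].
normal_charges = np.array([
    [12.50, 2], [45.00, 5], [8.75, 1], [60.00, 8],
    [22.10, 3], [15.30, 2], [75.00, 10], [30.00, 4],
])

# Train an anomaly detector on the cardholder's usual spending pattern.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_charges)

# A large charge far from home should be flagged for a fraud alert.
new_charges = np.array([[18.00, 3], [950.00, 1200]])
flags = detector.predict(new_charges)   # 1 = looks normal, -1 = unusual
for charge, flag in zip(new_charges, flags):
    status = "send alert" if flag == -1 else "ok"
    print(f"${charge[0]:.2f} at {charge[1]:.0f} miles -> {status}")
```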

The ultimate example, one Hernandez labels "futuristic," is a level of artificial intelligence that fully mimics human intelligence, in which a computer could perform any task that would normally require a person. It demands a sophisticated level of programming and considerable expense. In terms of its adaptability, Hernandez said, "we're not there yet."

Breakthroughs in AI: COVID-19 and the World of Healthcare

Experts agree that the pandemic, and the demands it has placed on healthcare, will lead the medical field to drive many new AI advances in the near future.

Intelligent cameras using a predictive AI model can detect security hazards such as firearms. In other contexts, such as voice analysis, AI can detect user emotions, explained Lina Bouayad, an associate professor of information systems and business analytics at FIU Business who has conducted extensive research on information assurance and artificial intelligence.

""It's an exciting time in the use of technologies – 10 years of digital progress has been compressed into the last six months with a lot of innovations."

Attila Hertelendy, Assistant Teaching Professor, Information Systems and Business Analytics

"The model not only predicts something that will happen, but how to respond," she said. "It's acting on its own. We're trying to let the machine handle easy tasks such as understanding and responding to user complaints through chatbots."

AI can be very powerful in contact tracing, medical diagnoses and data analysis. Bouayad explained that AI can trace the 14-day trajectory of an infected person and alert those with whom this person has had contact. During the pandemic, healthcare providers and researchers are also using AI to identify disparities in infection rates among racial and ethnic minority groups, what's driving those differences and the geographic areas most impacted.
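
A simplified sketch of that contact-tracing logic, using a made-up contact log: it collects everyone who crossed paths with an infected person during the prior 14 days so they can be alerted. Real tracing systems work from proximity signals at much larger scale.

```python
from datetime import date, timedelta

# Fabricated contact log: (person_a, person_b, date of contact).
contacts = [
    ("alice", "bob",   date(2020, 9, 20)),
    ("alice", "carol", date(2020, 9, 28)),
    ("bob",   "dave",  date(2020, 9, 29)),
    ("alice", "erin",  date(2020, 8, 15)),   # outside the 14-day window
]

def people_to_alert(infected: str, diagnosis_date: date, log) -> set:
    """Return everyone who had contact with the infected person in the last 14 days."""
    window_start = diagnosis_date - timedelta(days=14)
    alerts = set()
    for a, b, when in log:
        if window_start <= when <= diagnosis_date and infected in (a, b):
            alerts.add(b if a == infected else a)
    return alerts

print(people_to_alert("alice", date(2020, 10, 1), contacts))  # expects bob and carol
```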

Applications of AI have been used in the identification of positive COVID-19 cases from X-ray images, the detection of health abnormalities from sensor readings and the automation of patient responses through intelligent chatbots. "These tools have helped augment provider effectiveness, improve health outcomes and enhance patient satisfaction," Bouayad said.

Hospitals have analytics centers that use predictive data to assess patients' risk of readmission. "That's an area where AI is working well," Bouayad said. This newly available, rich level of data has facilitated continuous care through telemedicine once the patient leaves the hospital. Yet even in the hospital itself, AI tools give nurses the data they need, via specialized devices, to monitor the patient, an increasingly important capability in the era of COVID-19, when physical contact with the patient must be minimized.
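
A minimal sketch of readmission-risk scoring along those lines, assuming scikit-learn and a fabricated set of discharged patients; a real hospital model would rely on many more clinical variables and far more records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated features per discharged patient: [age, prior admissions, length of stay (days)].
X = np.array([
    [45, 0, 2], [70, 3, 8], [62, 1, 5], [80, 4, 10],
    [30, 0, 1], [55, 2, 6], [75, 3, 9], [40, 1, 3],
])
# 1 = readmitted within 30 days, 0 = not readmitted (made-up labels).
y = np.array([0, 1, 0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a newly discharged patient and flag high-risk cases for follow-up care.
new_patient = np.array([[68, 2, 7]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted 30-day readmission risk: {risk:.0%}")
if risk > 0.5:
    print("Flag for telemedicine follow-up")
```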

Sensors on patients, or a setup in their homes, allow doctors and nurses to monitor heart rate, glucose level and other conditions. In turn, chatbots can speak to patients – ask and answer questions or remind them of medications or treatments. Intelligent cameras analyze data in real time and give an alert if the patient is showing certain signs so doctors can have an idea of what the patient is suffering from.

"AI is helping take better care of patients, improve productivity for practitioners and enhance their interaction with patients," said Bouayad. "This existed before, but adoption rates were very slow. The pandemic has accelerated that."

Attila Hertelendy, an assistant teaching professor of information systems and business analytics at FIU Business who studies healthcare technology, agrees that real-time monitoring improves patient care, but warns that it's key for medical practitioners to follow up on factors including medication and diet compliance.

"For instance, if Ms. Smith doesn't show up in 48 hours, you could have used predictive analytics to send alerts, he said. "People might think this is intrusive in terms of privacy, but the truth is they're wearing these devices already."

In the case of COVID-19, he pointed out that having data in a large shared repository would have pinpointed spiking cases and identified where the spread originated, allowing for more effective decisions. Nevertheless, in the absence of national and state databases, he noted, COVID-19 has opened eyes to the transformational power of artificial intelligence.

"It's an exciting time in the use of technologies. Ten years of digital progress has been compressed into the last six months with a lot of innovations," said Hertelendy. "We'll ponder on this post-COVID to see if it's beneficial to society or whether our controls were too lax."

Averting AI's Dangers

AI can help make better-informed decisions and deliver increasingly useful data, but it can also introduce biases and deliver skewed results that have real-world implications and outcomes.

Bias, say experts, is a huge problem that could impact medical care, racial inequality, criminal justice and job interviews. It could also raise issues of privacy, security and ethics.

How does it happen? Algorithms are not perfect, at least not in their current state. They learn from data, the data humans give them. They are therefore likely to inherit the human biases present in that data.

"You give the machine a lot of examples and let it learn by itself," said Lina Bouayad, associate professor of information systems and business analytics at FIU Business. "If the data is biased, then the results will be biased."

For instance, she noted, if the data doesn't contain enough observations about minorities, "the computer isn't going to learn how to make accurate predictions about these groups of people."

Biases may arise if the algorithm wasn't given data that is representative of the sector that it will be used in. Groups may be overrepresented or underrepresented. Another source of bias can be in the design of the algorithm itself.

Rod Hernandez (MSIS '11, BBA '07), a senior manager and HR technology strategist at a leading management consulting firm, noted that many AI-based job searches will favor candidates whose profiles mimic those who had historically succeeded at a job – young white males who attended Ivy League schools, for example – and wind up perpetuating bias in hiring. It's a particularly difficult problem in areas like Wall Street, Silicon Valley and the leading management consulting firms, where hiring patterns are deeply entrenched.

AI models trained on patient data have recently raised bias concerns. In one case, smart watches measuring users' heart rates were delivering inaccurate information about Black users. Darker skin, which has more melanin, absorbs more of the green light used by optical heart-rate (PPG) sensors, making it harder to get an accurate reading. Research showed inaccurate PPG heart rate measurements occur up to 15 percent more frequently in dark skin than in light skin.

"The problem is still there," said Bouayad. "We need to educate doctors to determine what's the probability of inaccurate information and how to deal with errors."

On the jobs front, companies have turned to AI for recruiting. Intelligent cameras and face-scanning algorithms, along with games and question-and-answer features, help determine whether an applicant's personality is a good fit for the job, but they can also screen applicants out.

The best way to detect bias in AI is by testing the algorithm being used in the context in which it will be applied. However, assessing its accuracy isn't easy, Bouayad warned.

"Test the tool internally before deployment; code-check any apps," she said. "Determine if the issues that will be tackled were included in the original data set that the algorithm trained on."

In early 2020, Illinois became the first state to enact the Artificial Intelligence Video Interview Act. It requires employers to notify applicants that artificial intelligence technologies may be used to analyze the video interview; explain how the artificial intelligence technologies work and will be used; obtain consent from the applicant to be evaluated by artificial intelligence technologies; and delete the video within 30 days of a request from the applicant.

To address AI bias in the recruiting field, Hernandez said, technology solutions designed to root out bias in recruiting platforms have already been developed and are being improved. But to ensure a proactive approach, it's also an issue Hernandez and others have raised in discussions with leadership circles across the Fortune 500.

Hernandez stressed the need "to be aware of AI bias, and put thought into preventing it. We need to do it for ourselves and our clients," he said.

Attila Hertelendy, assistant teaching professor of information systems and business analytics, explained that technology can sometimes become a barrier to an applicant getting a job. "When the algorithms don't identify the magic words in your resume, your CV doesn't progress to a human," he said. "If qualified candidates' resumes don't pass to a human, that's bad AI."

Privacy concerns regarding AI's role in collecting consumers' data, and often reselling it in different contexts, are at an all-time high.

"We give up our data every time we click ‘accept,'" Hertelendy said. "Information privacy is quite optimistic; 99% of the time we never read the privacy statement when we download an app or program, we just click ‘accept.'"

He added that one alarming development was that major company executives, like Bill Gates and Elon Musk, have gone on the record with concerns about AI's potential for misuse, warning that it could become a societal game-changer if it's not monitored or made part of the public discourse. "It could be very intrusive and dangerous."