TOKYO -- With artificial intelligence making huge strides and proliferating rapidly, the possibilities and benefits seem endless. But the development of AI has already presented numerous ethical questions about how it is used.
The advent of the AI age has already raised concerns over unfair treatment of certain groups, arbitrary infringements of basic rights, and job losses and other problems in the workplace.
A casino in Manila provides a case in point. A South Korean customer at the roulette table commented to the croupier that he would not have enough money to fly home at the rate he was losing. "Looks like you'll just have to swim," she quipped in reply.
Neither had any idea they were both being watched by an array of hidden cameras installed on the ceiling, with just 50 cm separating each one from the next.
But these are no ordinary security cameras. They form part of a state-of-the-art system designed not just to catch people cheating, but to spot people who might be likely to cheat.
Suspicious gamblers are identified by analyzing the tiniest of facial movements and comparing them against image data on about 100,000 drug addicts, shoplifters and other people with criminal records. The system picks out around 10 individuals a day.
Similar surveillance systems have already been introduced at many airports and event sites around the world.
This, however, has led to serious ethical problems. Under one system used in the U.S., African Americans are statistically more likely to be deemed suspicious than members of other ethnic groups. Because it bases its judgments on past data, the system's supposedly logical and "objective" analysis has reproduced the kind of prejudice society has been trying to eradicate.
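The mechanism behind this feedback loop can be sketched in a few lines. The data, group labels and scoring rule below are entirely hypothetical, invented only to illustrate how a model trained on skewed historical records echoes the skew back as a "prediction":

```python
# Hypothetical example: a model "trained" on biased past records
# reproduces that bias. No real system or data is represented here.
from collections import Counter

# Invented history: each entry is (group, was_flagged). Group B was
# historically flagged twice as often for identical behavior.
history = [("A", True)] * 10 + [("A", False)] * 90 \
        + [("B", True)] * 20 + [("B", False)] * 80

# "Training": estimate a per-group flag rate from the past data.
flagged = Counter(group for group, was_flagged in history if was_flagged)
total = Counter(group for group, _ in history)
prior = {group: flagged[group] / total[group] for group in total}

# The learned rates simply echo the historical skew, so group B is
# deemed twice as suspicious despite identical underlying behavior.
print(prior)  # {'A': 0.1, 'B': 0.2}
```

Nothing in the arithmetic is "prejudiced"; the disparity enters entirely through the training data, which is the crux of the problem described above.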
"There is a theory that criminals are born, not made. Any system based on such thinking will cause serious violations of human rights," said Fumio Shimpo, a professor at Japan's Keio University and an expert on AI-related legislation.
Many innocent people could suddenly find themselves designated would-be criminals by AI and shunned by their peers. This poses difficult questions about what an ideal society looks like, regardless of any potential decline in crime rates.
In February, Japan's Hitachi Solutions will start selling a new AI-based personnel management system designed to identify employees who are likely to take a leave of absence.
The system analyzes employees based on factors such as performance and overtime hours worked. It issues a warning to managers so they can help prevent absences by adjusting employees' workloads.
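A screen of this kind could be as simple as a threshold rule. The sketch below is purely illustrative: the field names, weights and cutoffs are invented, and Hitachi's actual model is not public.

```python
# Hypothetical sketch of a rule-based absence-risk screen.
# Thresholds and factors are invented for illustration only.

def absence_risk(performance: float, overtime_hours: float) -> bool:
    """Flag an employee as at risk of taking leave (made-up rule:
    low recent performance combined with heavy overtime)."""
    return performance < 0.5 and overtime_hours > 60

employees = [
    {"name": "E1", "performance": 0.4, "overtime_hours": 80},
    {"name": "E2", "performance": 0.9, "overtime_hours": 20},
]

# Names surfaced to managers as warnings.
warnings = [e["name"] for e in employees
            if absence_risk(e["performance"], e["overtime_hours"])]
print(warnings)  # ['E1']
```

Even a toy rule like this makes the ethical stakes concrete: the output is a named list of individuals, which is exactly what creates the risk of unfair treatment discussed below.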
This posed a problem for Shigeki Yamamoto, the official who led the project. Being singled out as a likely absentee by the system could influence how an employee is viewed by their superiors, and lead to unfair treatment. But coming up with a way to prevent such a situation has not been easy.
Contracts with clients purchasing the system will include a provision specifying that it "should be used in a manner that does not put individuals at a disadvantage."
The company will also give clients the number of employees they can expect to take leave, although doing so will make the new system less effective.
Rules on how AI should be used have become necessary in every field. The world of "shogi," or Japanese chess, has lagged behind.
Last year, Hiroyuki Miura, a top-level professional shogi player, was suspected of cheating during official games by using shogi software on his smartphone. An inquiry later cleared Miura, having found no evidence of any wrongdoing.
Koji Tanigawa, chairman of the Japan Shogi Association, resigned over the scandal. He said, "While software has advanced rapidly, we were slow to establish rules."
Toshihito Mitsui, president of Japan Para Athletics, also stressed the need to "set regulations" ahead of the 2020 Tokyo Olympics and Paralympics.
There are currently no specific regulations on the artificial legs and wheelchairs used by athletes. But athletes whose equipment incorporates AI are likely to be the ones who keep breaking records.
Even if AI itself takes a completely unbiased approach, the way it is used can make it hugely unfair. The question now is what kind of society we want to create with AI.