TOKYO/BRUSSELS -- The European Union is seeking transparency in the use of artificial intelligence and, under ethical guidelines currently being drafted, aims to require companies to explain the decision-making behind the AI they use.
With AI increasingly employed in providing loans and hiring workers, concerns are growing that the technology may rely on data biased with regard to race or gender.
Under the draft guidelines due out soon from an expert council, companies would be required to spell out how AI reached decisions. The panel will also propose creating a framework for disclosing what kind of data the software was trained on, in addition to setting up an organization for overseeing the ethical operation of AI and establishing an ethics certification program.
Based on the council's proposals, the European Commission will draw up concrete guidelines by year's end.
While these may be early days in the EU's development of AI standards, the move highlights Brussels' interest in taking the lead in shaping global rules. The initiative comes after the EU's success in devising tough data privacy regulations that are increasingly being emulated by other countries, including in Asia.
The new rules will be a challenge for companies involved in AI. But Chinese tech companies such as Tencent Holdings, Alibaba Group Holding and Sina, the operator of the Twitter-like Sina Weibo service, have been adapting overseas operations to new regulations.
Often assigning representatives to negotiate with local authorities, these Chinese tech companies are expected to swiftly tailor European operations to the new requirements.
The EU is stepping up efforts to tighten regulations related to the data economy, in which companies compete to amass and weaponize huge volumes of information. This May, the EU implemented the General Data Protection Regulation, or GDPR, which is designed to strengthen individual privacy protections. It has also upbraided U.S. information technology giants like Google and Facebook for alleged violations of competition laws.
The expert council also suggests requiring companies to take out liability insurance, with an eye to hypothetical cases such as housekeeping robots causing accidents due to AI defects.
The final policy is not expected to include penalties for violators. But EU member countries may use it as a basis for drafting their own laws. Companies found to be using AI inappropriately could face a backlash from investors and consumers.
AI is rapidly spreading, with lenders using it to rate applicants based on personal information and purchase history and to determine the conditions of a loan. More employers are using AI to scan resumes and decide whether to hire candidates.
But humans feed AI the information it uses to build decision-making standards, and concerns are growing in Europe and the U.S. that if the data contains biases against certain types of people, those biases could be reflected in its decisions -- denying home loans to people of certain races, or hiring fewer people of one gender, for instance. The EU seeks greater transparency to combat the tendency for AI to be treated as a "black box," with its processes rarely revealed to outsiders, making it harder to detect and fix problems.
So far, companies like Google have crafted their own ethics rules for AI. The EU's policy looks to be the first such set of rules created on a broad scale. The bloc will likely take advantage of technological advances, such as software currently under development to chart an AI's decision-making process.