As artificial intelligence (AI) and intelligent automation continue to grow, so do fears about the negative impact they could have on people’s lives. Let’s look at an example: an algorithm trained to automate the review of resumes for software engineering roles. What if it is trained on a gender-biased data set and ‘learns’ to disqualify candidates based on their gender?
Another example: when you apply for a home loan by phone or face to face, the human loan officer will usually explain to you why your application was rejected. That might not be the case when you apply online and an algorithm decides. Without that transparency, you cannot be sure that the decision was a fair one, nor can you learn how to adjust your credit behavior in the future.
That’s why NICE decided it was time to tackle these large-scale concerns by offering the RPA industry practical guidance on embracing solid ethical practices, ensuring that the development and deployment of robotics does not negatively impact humans. NICE’s starting point for the code of ethics is its purpose: to augment human potential with robotics. The ethical framework comprises principles that NICE embraces in the design and development of its robots. These principles are embedded into the NICE RPA platform and help ensure the technology is used for the benefit of all. Here are the five key principles of the NICE Robo-Ethical Framework:
- Robots must be designed for positive impact.
NICE recognizes that automation could disrupt the labor market at both the macro and micro scale. Understanding that technology is changing our world in complex ways, the vendor commits to creating and using robots in ways that make positive contributions. Every project that involves robots should have at least one clearly defined positive rationale with respect to its societal, economic, and environmental impact.
- Robots must be designed to be free from biased decision-making.
NICE robots do not consider personal attributes such as color, religion, sex, gender, age, or any other protected status. NICE technology does not evaluate processes or generate recommendations based on personal characteristics or group identities. The vendor recommends that training algorithms be evaluated and tested periodically to ensure they remain free from bias.
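One simple form such a periodic test could take is measuring whether outcomes differ across groups. The sketch below is illustrative only (the metric, group labels, and threshold are assumptions, not NICE's actual methodology); it computes the demographic parity gap, i.e. the largest difference in approval rates between any two groups.

```python
# Hypothetical sketch of a periodic bias check: compare a model's
# approval rates across groups. Data and threshold are illustrative.

def demographic_parity_difference(decisions, groups):
    """Return the largest gap in approval rate between any two groups.

    decisions: list of 0/1 outcomes (1 = approved)
    groups:    list of group labels, aligned with decisions
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + decision, total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Illustrative outcomes for two groups, A and B
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)  # 0.75 - 0.25 = 0.5
needs_review = gap > 0.2  # flag the model for human review past a tolerance
```

A gap near zero does not prove a model is fair, but a large gap is a useful trigger for the kind of human review this principle calls for.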
- Robots must be designed to minimize the risk of individual harm.
To avoid harm to people, humans should choose whether and how to delegate decisions to robots. The algorithms, processes, and decisions embedded within robots should be transparent, with the ability to explain conclusions with an unambiguous rationale. Accordingly, humans must be able to audit a robot’s processes and decisions. If a robot causes harm to an individual, a human must be able to intervene to correct the system and prevent recurrence. NICE AI does not make any decisions; rather, it brings all relevant data to the user so they can make effective and efficient decisions.
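The "robot informs, human decides" pattern described above can be sketched as a decision record that pairs the evidence the robot surfaced with the human's decision and an explicit rationale, so every outcome can be audited later. All names and the example case below are hypothetical, not NICE's actual data model.

```python
# Hypothetical sketch: every human decision is stored with the evidence
# the robot gathered and a mandatory rationale, enabling later audit.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    evidence: dict      # data the robot brought to the user
    decision: str       # made by a human, never by the robot
    rationale: str      # unambiguous reason, required for audit
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

audit_log = []  # in practice this would be durable, append-only storage

def record_decision(case_id, evidence, decision, rationale):
    if not rationale.strip():
        raise ValueError("every decision must carry an explicit rationale")
    record = DecisionRecord(case_id, evidence, decision, rationale)
    audit_log.append(record)
    return record

# Illustrative case: a loan reviewed by a human using robot-gathered data
record_decision(
    "loan-1042",
    evidence={"credit_score": 612, "debt_to_income": 0.48},
    decision="rejected",
    rationale="debt-to-income ratio above the 0.43 policy threshold",
)
```

Requiring a non-empty rationale at write time is what makes the loan-rejection scenario from the introduction auditable: the applicant (or a reviewer) can always recover the stated reason.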
- Robots must be trained and function on verified data sources.
The NICE Robo-Ethical Framework requires that robots act only on verified data from known and trusted sources. Data sources used for training algorithms should be maintained with the ability to reference the original source. The data used by NICE is self-generated, not informed by any non-verified external source.
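One common way to keep data traceable to a known source is to register a checksum at ingestion time and re-verify it before use. The sketch below is an assumption-laden illustration (the registry, source names, and payloads are invented for the example), not NICE's actual mechanism.

```python
# Hypothetical sketch: admit a record only when it comes from a known
# source and its payload still matches the checksum registered at
# ingestion time. Source names and data are illustrative.

import hashlib

TRUSTED_SOURCES = {
    # source name -> SHA-256 of the payload registered at ingestion
    "internal-crm-export":
        hashlib.sha256(b"customer,amount\nacme,100\n").hexdigest(),
}

def verify_record(source: str, payload: bytes) -> bool:
    """Accept data only from a known source whose checksum matches."""
    expected = TRUSTED_SOURCES.get(source)
    if expected is None:
        return False  # unknown source: reject outright
    return hashlib.sha256(payload).hexdigest() == expected

verify_record("internal-crm-export", b"customer,amount\nacme,100\n")  # True
verify_record("internal-crm-export", b"customer,amount\nacme,999\n")  # False
```

The registry doubles as the "reference to the original source" the principle calls for: each checksum entry names where the data came from.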
- Robots must be designed with governance and control.
Humans should be informed of a system’s capabilities and limitations. A robotics platform should be designed to protect against abuse of power and unauthorized access by limiting, proactively monitoring, and authenticating all access to the platform and every type of edit action in the system. At any moment, a customer has visibility into every component of the NICE system, with complete governance and control over what each robot should do and what it actually does.