What do you need to trust fully autonomous robots?

Autonomous systems promise to streamline the way humans make decisions in our personal lives, in business, and even in the military. These systems use sensors, data, and machine learning to reduce the cognitive load placed on operators who make critical decisions. By continuously monitoring and processing the structured data our surroundings generate, they can speed up tasks that used to take hours, days, or even months. But can we really trust their way of thinking? Can we place our lives in the hands of these machines?

Machines have long taken on mechanical tasks more efficiently than humans ever could. They quickly proved themselves, and we got used to letting them take over the simple, mundane jobs we don’t necessarily want. But machines are evolving quickly, faster than many of us anticipated. They are intertwining with our lives more each day, and their cognitive power is beginning to challenge our own. The amount of trust we must place in these machines grows every day, and the consequences of their mistakes can be deadly. Placing our trust in them becomes a continuous game of risk management. Do I trust this machine enough to make an actionable decision that might impact my life? Can I trust that it has considered all scenarios better than I could with the given parameters? Has anyone tampered with its programming to purposely sway my thoughts or actions? Has it been tested enough to cover all the use cases that apply to my needs? What does it mean for a human to trust a machine? How much trust is enough?

I don’t think we’ll find a single answer that satisfies everyone, since the very definition of trust may differ for each of us, so let’s start from a common basis. Merriam-Webster’s dictionary defines trust as:

 Assured reliance on the character, ability, strength, or truth of someone or something. 

So, it seems we must rely on machines to be part of our decision-making process, and we must have some degree of confidence in the ability of those machines to shape our reasoning toward some belief or action.

Now, let’s break down the composition of these autonomous systems. We can classify them into two categories: Autonomy in Motion and Autonomy at Rest. This is a simple distinction between systems with kinetic functions, such as cars, drones, or robots, and systems with non-kinetic functions that operate virtually in software, such as personal assistants or cyber security systems. These systems rely on algorithms to analyze large amounts of data generated by connected components such as radar, cameras, network traffic, Global Positioning Systems (GPS), and Electromagnetic Sensing (EMS). All of this data is aggregated where the system can make sense of it, and then machine learning, artificial intelligence, data modeling, and scenario generation, among other techniques, are used to analyze it and provide recommendations to operators. The operators then must trust the autonomous system and make decisions based on its analysis.

Each decision carries an associated risk, and that risk drives how we build redundancy into these systems. Critical autonomous applications require a higher level of confidence, so these systems may leverage multiple input sources and fail-safes in order to provide reliable functions. We need less trust in a personal assistant that turns on a light in our house than in an autonomous car that drives our family around, since our risk assessment of the two functions is vastly different. We think harder before we put our kids in an autonomous car than we do before asking Siri to recommend a song, because the failure of one system does not have the same consequences as the failure of the other.
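
As an illustration of how risk shapes the amount of confidence a system should demand before acting on its own, here is a minimal, hypothetical sketch in Python; the sensors, confidence values, and threshold mapping are all invented for the example and are not drawn from any real system:

```python
# Illustrative sketch: fuse several (hypothetical) sensor readings into one
# confidence score, then decide whether to act autonomously or defer to a
# human operator based on how critical the task is.
from dataclasses import dataclass


@dataclass
class SensorReading:
    source: str        # e.g. "camera", "radar", "gps"
    confidence: float  # 0.0 .. 1.0, how sure this sensor is about the event


def fused_confidence(readings: list[SensorReading]) -> float:
    """Naive fusion: average the per-sensor confidences."""
    if not readings:
        return 0.0
    return sum(r.confidence for r in readings) / len(readings)


def decide(readings: list[SensorReading], criticality: float) -> str:
    """More critical tasks demand a higher fused confidence before acting.

    criticality: 0.0 (turn on a light) .. 1.0 (drive the kids to school).
    """
    threshold = 0.5 + 0.45 * criticality   # arbitrary illustrative mapping
    if fused_confidence(readings) >= threshold:
        return "act autonomously"
    return "defer to human operator"


if __name__ == "__main__":
    readings = [
        SensorReading("camera", 0.92),
        SensorReading("radar", 0.88),
        SensorReading("gps", 0.75),
    ]
    print(decide(readings, criticality=0.1))  # low-risk task: acts on its own
    print(decide(readings, criticality=1.0))  # safety-critical: defers
```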

There are three main areas where autonomous system developers can improve in order to build trust between the human and the machine:

  1. Human-Computer Interaction (HCI): Engineers have realized that they cannot lock themselves in a room to build products; integrating engineering with the human sciences is necessary to build comprehensive solutions that meet users’ needs. This becomes more and more obvious as the complexity of these systems increases. Humans need to understand why decisions are made without having to know the implementation details, which is indispensable in critical applications where human lives are at risk (a simple sketch of a recommendation that carries its own rationale follows this list). The handoff between the human and the computer must be seamless for the decision process to work.

  2. Cyber Security Systems: Input components may lie outside the boundaries of the physical system, and distributed computing systems leverage the computational power of cloud infrastructure to perform analytics on large amounts of data. That data must be validated against a predefined, trusted set so it can be classified according to the system’s needs. Autonomous systems still have the same cyber security needs as any other component: malware detection, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), deception technologies, and so on. These systems still need to be in place, although two distinctions can be made:

a. Cyber security systems need to become smarter, using technologies such as machine learning and artificial intelligence (AI) to augment their capabilities and keep up with the evolving attacks their adversaries bring. Techniques such as signature-based detection are quickly becoming obsolete thanks to polymorphic attacks that mutate based on their targets. Systems have to change from reacting to threats to actively predicting them, and from being static to being dynamic (a minimal anomaly-detection sketch follows this list).

b. Autonomous sensors bring new challenges to the cyber security space. Machine learning algorithms depend on the integrity of their data to provide reliable and resilient solutions; if the data is compromised, then the whole system cannot be trusted. A great deal of research is being done to ensure that AI algorithms cannot be purposely fooled; attacks such as adversarial images can be potentially fatal in critical systems (a toy adversarial-perturbation example follows this list).

  3. Testing Tools and Methodologies: Software development has been changing in recent years due to the increase in computational power available in current systems. Software is becoming dynamic, and the predictability of such systems is disappearing, which makes testing autonomous systems extremely difficult. Developers cannot use conventional testing methodologies to test their algorithms; they must rely on computational models and scenario generation to cover the many use cases users may encounter (a scenario-generation sketch follows this list). On average, current commercial software tests only about 6 percent of the lines of code written, mostly through unit tests and integration tests with a well-defined set of parameters. Autonomous systems differ because the amount of data they must learn from grows far beyond what static software implementations face. It is hard to predict every scenario an autonomous system will encounter in the wild, and the complexity of generating scenario models increases with the size and criticality of the system.
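
To make the HCI point from item 1 concrete, here is a minimal, hypothetical sketch of a recommendation object that carries a plain-language rationale, so an operator can see why an action is suggested without reading the implementation; all names and values are invented for illustration:

```python
# Illustrative sketch: a recommendation that explains itself in plain language.
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    action: str
    confidence: float
    reasons: list[str] = field(default_factory=list)

    def explain(self) -> str:
        bullet_points = "\n".join(f"  - {r}" for r in self.reasons)
        return (f"Recommended action: {self.action} "
                f"(confidence {self.confidence:.0%})\n"
                f"Because:\n{bullet_points}")


if __name__ == "__main__":
    rec = Recommendation(
        action="slow down and change lanes",
        confidence=0.87,
        reasons=[
            "radar reports a stationary object 40 m ahead",
            "camera classification agrees (pedestrian, 0.91 confidence)",
            "adjacent lane has been clear for the last 3 seconds",
        ],
    )
    print(rec.explain())
```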
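
For item 2a, here is a minimal sketch of what moving from static signatures to learned anomaly detection could look like, assuming the scikit-learn library is available; the traffic features and values are made up for illustration, and a real IDS would use far richer telemetry:

```python
# Fit a model of "normal" behaviour instead of matching known-bad signatures.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per connection: [bytes sent, bytes received, duration s]
normal_traffic = np.array([
    [500, 1200, 0.4],
    [450, 1100, 0.3],
    [520, 1300, 0.5],
    [480, 1150, 0.4],
    [510, 1250, 0.6],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_traffic)

new_connections = np.array([
    [495, 1180, 0.4],      # looks like ordinary traffic
    [90000, 50, 30.0],     # large upload, tiny response: possible exfiltration
])

# predict() returns 1 for inliers and -1 for anomalies.
for conn, label in zip(new_connections, detector.predict(new_connections)):
    status = "anomalous" if label == -1 else "normal"
    print(conn, "->", status)
```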
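
For item 2b, the following toy example, loosely in the spirit of the fast gradient sign method, shows how a per-pixel change far too small to notice can flip a simple linear classifier's decision; the model, image, and class names are all invented:

```python
# Toy adversarial perturbation against a hand-built linear "image" classifier.
import numpy as np

rng = np.random.default_rng(42)

n_pixels = 784                          # a hypothetical 28x28 image
w = rng.normal(size=n_pixels)           # "trained" weights of a linear model
x = rng.uniform(size=n_pixels)          # original image, pixel values in [0, 1]
b = 3.0 - float(w @ x)                  # bias chosen so x is confidently classified


def classify(image: np.ndarray) -> str:
    return "stop sign" if float(w @ image) + b > 0 else "speed limit sign"


print("original: ", classify(x))

# For a linear model the gradient of the score w.r.t. the input is just w,
# so nudging each pixel slightly against the sign of w lowers the score most.
epsilon = 0.02                          # max change per pixel, on a [0, 1] scale
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print("perturbed:", classify(x_adv))
print("largest per-pixel change:", float(np.max(np.abs(x_adv - x))))
```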
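
Finally, for item 3, this sketch uses the Hypothesis property-based testing library (assuming it is installed) to generate many randomized scenarios and check a safety property in each one, rather than relying on a handful of hand-written cases; the planner function is a made-up stand-in for a real component:

```python
# Scenario generation via property-based testing with Hypothesis.
from hypothesis import given, strategies as st


def plan_speed(distance_to_obstacle_m: float, max_speed_mps: float) -> float:
    """Hypothetical planner: slow down as an obstacle gets closer."""
    if distance_to_obstacle_m <= 5.0:
        return 0.0
    return min(max_speed_mps, distance_to_obstacle_m / 2.0)


@given(
    distance_to_obstacle_m=st.floats(min_value=0.0, max_value=500.0,
                                     allow_nan=False, allow_infinity=False),
    max_speed_mps=st.floats(min_value=0.0, max_value=40.0,
                            allow_nan=False, allow_infinity=False),
)
def test_speed_is_safe(distance_to_obstacle_m, max_speed_mps):
    speed = plan_speed(distance_to_obstacle_m, max_speed_mps)
    assert 0.0 <= speed <= max_speed_mps        # never exceeds the limit
    if distance_to_obstacle_m <= 5.0:
        assert speed == 0.0                      # always stops near an obstacle


if __name__ == "__main__":
    # Calling the decorated test makes Hypothesis generate many scenarios.
    test_speed_is_safe()
    print("all generated scenarios passed")
```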

Trust will define how much humans rely on these emerging technologies, and a lack of trust will constrain their abilities and usability. We build systems with great capabilities, yet we often constrain them because of security or technical concerns. This is why we must strive to find methodologies that let us build secure, reliable, and resilient systems that humans can truly use without questioning their process or reasoning.
