Reliability and Safety Aspects of Autonomous Systems

With rapid advances in technology, autonomous systems have attracted considerable attention. Thanks to increasingly capable hardware and software, autonomous vehicles such as drones and driverless cars are becoming more common. Economists expect demand for electric vehicles to increase in 2021, putting millions of electric automobiles on roads worldwide in the coming years. The auto industry has made significant progress in producing self-driving vehicles, known as autonomous cars, thanks to ever-improving technology, with Hyundai, Tesla, and Google among the frontrunners in their development.

As autonomous systems integrate into our society, it becomes critical to ensure that they remain safe, especially in the face of unplanned and unpredictable situations. Their recent rapid rise has enabled a slew of new services and businesses that were previously unimaginable. However, these benefits are accompanied by exceedingly computationally demanding mission- and safety-critical application scenarios.

Autonomous systems make decisions based on their knowledge. As their use increases in all aspects of everyday life, new questions will arise about their safe and ethical deployment, our expectations of them, and the circumstances under which we can and should trust them. Answering these questions will require the public, technical teams, and regulators to work together.

Standards of Reliability for Autonomous Systems

Expectations and criteria for the safety and reliability of autonomous systems are firmly established in international standards, implicit customer expectations, and, not surprisingly, insurance policies. Autonomous systems also constitute a new industrial sector that is likely to persist for a long time. On reliability and safety, international standards are the most precise and authoritative prescribers.

The current standards in this field, and those under development, include:

  • IEC 61508 (Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems) applies to industrial fields.
  • ISO 26262 is derived from IEC 61508 and covers the functional safety of road vehicles, including autonomous ones.
  • IEC 62279 is a modified form of IEC 61508 for railway-related applications.
  • ISO 13849 addresses the safety of machinery control systems that perform safety functions.
  • AC 25.1309-1A provides guidance on airplane system design and analysis.
  • RTCA/DO-254 is a design assurance guidance standard for airborne electronic hardware.

Because the technology evolves so rapidly, reliability standards must be enhanced accordingly. In the fields of AI and autonomous systems in particular, reliability standards are still under development, along with measures to ensure safety and trust. These must guarantee safe and successful interactions between autonomous systems and both people and other systems.

Challenges to Reliability and Safety of Autonomous Systems

Complexity

Any complex system faces the persistent challenge of emergent behaviors that arise from interactions within the system. Understanding and managing each of the system's components separately does not guarantee the safe operation of the system as a whole, and unanticipated emergent behavior raises the risk of unsafe operation. To deal with this, risk-management methods intended to make autonomous systems safer and more reliable will have to consider these implications and the strategies for prevention and mitigation.

System Oversight

Autonomous systems are deployed in complex environments, which increases the number of actors involved. This increase requires oversight at a much broader level than for previous systems, and it creates a significant challenge in assigning liability for an autonomous system's actions.

Adversarial Behavior

Individuals may act subversively or aggressively against autonomous systems, especially given their facelessness. It is therefore necessary to learn from how previous technologies have been misused.

Testing and Validation

Experimental results for autonomous systems in controlled environments differ from their behavior in the complex environments in which they actually operate. It is impossible to anticipate every situation an autonomous system may face in the real world, and there may be real-world situations in which it fails. To combat some of this uncertainty, the variety of scenarios investigated should be chosen on a risk basis.

This entails providing substantially denser coverage of potentially high-risk scenarios, even if they are statistically improbable. In an emergency, a system has two alternatives: it can come to a halt to allow for human intervention, or it can make its own choice based on the data available at the time. The ability of human operators to monitor and take control of autonomous systems when they approach their limits or encounter issues must also be confirmed.
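
As a rough illustration of risk-based coverage, the Python sketch below allocates a fixed test budget across scenario classes in proportion to estimated risk (probability times severity) rather than raw frequency, so that rare but dangerous cases receive dense coverage. The scenario names and numbers are invented assumptions, not figures from any real test program.

    # Illustrative sketch: allocate a scenario-test budget by estimated risk
    # (probability x severity) rather than raw frequency, so rare but
    # dangerous cases receive dense coverage. All numbers are assumptions.

    SCENARIOS = {
        # name: (probability of occurrence, severity if mishandled, 0-10)
        "highway_cruise":      (0.70, 2),
        "urban_intersection":  (0.25, 6),
        "pedestrian_dart_out": (0.04, 10),
        "sensor_blackout":     (0.01, 9),
    }

    def risk_based_allocation(scenarios, total_tests):
        """Split total_tests across scenarios in proportion to risk."""
        risks = {name: p * sev for name, (p, sev) in scenarios.items()}
        total_risk = sum(risks.values())
        return {name: round(total_tests * r / total_risk)
                for name, r in risks.items()}

    for name, n in risk_based_allocation(SCENARIOS, 10_000).items():
        print(f"{name:22s} {n:6d} tests")

Under a frequency-only allocation, the two rare scenarios here would receive about 5% of the budget between them; weighting by severity multiplies their share several-fold.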

Verification

A major challenge in the safety of autonomous systems is verification, especially for systems that learn and adapt in response to their surroundings, because learning expands their decision-making beyond the system that was initially created and tested. The opacity of the learning process, which means the software cannot be validated using traditional methods, makes matters even more problematic.

Conclusion

Various machine learning algorithms are used to make autonomous systems more reliable and safe. Using these algorithms, a system learns the owner's usage patterns over time; when something happens that deviates from those patterns, the algorithm detects it, alerts the owner, and demands user credentials.
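
As a hedged sketch of this kind of pattern-based protection, the snippet below trains scikit-learn's IsolationForest on feature vectors summarizing normal owner behavior and flags sessions that deviate from the learned pattern. The features (hour of day, trip duration) and the data are illustrative assumptions only, not the method of any particular system.

    # Minimal sketch of owner-pattern anomaly detection, assuming each
    # session is summarized as (hour of day, trip duration in km).
    # Feature choices and data are invented for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Assumed "owner" history: sessions clustered around familiar habits.
    normal_usage = rng.normal(loc=[8.0, 12.0], scale=[1.0, 3.0], size=(500, 2))

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_usage)

    def matches_owner_pattern(session_features):
        """Return True if the session fits the learned owner pattern."""
        return detector.predict([session_features])[0] == 1  # -1 = anomaly

    print(matches_owner_pattern([8.2, 11.5]))  # typical session -> likely True
    print(matches_owner_pattern([3.0, 90.0]))  # unusual session -> likely False

In a deployed system, a negative result would trigger an alert and a credential check rather than a hard stop.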

Methods for mathematically demonstrating properties of machine learning systems are currently being developed in academia. For high-impact, high-autonomy applications, these techniques will have limitations, so new, transparent techniques for machine learning verification will be required. A shift toward operational verification may be required to handle the ongoing learning by a system and by those connected to it. This would entail determining where decisions are made and applying focused verification techniques to those points.
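
To make the idea of demonstrating properties concrete, here is a toy, sampling-based falsification check of one safety property of an invented controller: the commanded steering output must stay within actuator limits for all inputs in a given range. Sampling yields evidence rather than a proof; formal verification tools aim to establish such properties exhaustively. Both the controller and the property are assumptions for illustration.

    # Toy sketch of property checking by falsification: search for inputs
    # that violate a stated safety property. Finding no violation is only
    # evidence; formal methods aim to prove the property over all inputs.
    import numpy as np

    def toy_controller(obstacle_angle_rad):
        """Invented stand-in for a learned steering policy."""
        return np.tanh(2.0 * obstacle_angle_rad)  # command in (-1, 1)

    def property_holds(command, limit=1.0):
        """Safety property: steering command stays within actuator limits."""
        return abs(command) <= limit

    rng = np.random.default_rng(1)
    samples = rng.uniform(-np.pi, np.pi, size=100_000)
    violations = [x for x in samples if not property_holds(toy_controller(x))]
    print(f"violations found: {len(violations)} out of {len(samples)} samples")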

These approaches are still in the early stages of development. Where they are deemed crucial to the safety of particular autonomous systems, this will need to be considered when setting deployment timeframes.
