Rise in industry use of AI technologies in workplaces and future challenges for health and safety: Part 2

Steven Naylor
AI
Industry 4.0
New technologies

In Part 1 of this article I considered the potential knock-on effects for health and safety when industrial AI systems used in workplaces get it wrong, and offered a perspective on key considerations in attempting to mitigate such risks. In Parts 2 and 3 of the article, I consider other health and safety related challenges associated with the rise in use of industrial AI systems in workplaces, and of Industry 4.0 technologies more generally. These include, for example:

  1. the emergence of new health and safety risks stemming from the use of the technologies
  2. the challenges of judging whether any risks to health and safety arising from the use of the technologies are as low as is reasonably practicable
  3. the big data and data security challenges when working with the technologies, particularly those associated with extremely large flows of data and sensitive data
  4. the challenges of dealing with cyber security risks when technologies have connections to the internet
  5. the challenges of investigating serious accidents when the technologies are implicated as factors, particularly those controlled by complex algorithms
  6. the challenges of understanding who is legally liable when complex AI technologies go seriously wrong
  7. the challenge of striking a balance between capitalising on the benefits that AI can bring in this context and ensuring its use is ethical

Challenges 1 to 4 are considered in Part 2; challenges 5 to 7 are considered in Part 3.


The health and safety regulatory system in GB, often described as goal-, risk- or performance-based, is founded on the principle of allowing employers certain freedoms in how best to mitigate health and safety risks in order to meet their statutory health and safety responsibilities. HSE, the regulator of health and safety in GB, supports employers' legal duties in this regard by providing suitably scoped industry guidance, informed in part by learning accrued from investigations carried out following serious health and safety accidents. It monitors whether industry is meeting its statutory responsibilities through a programme of targeted workplace inspections, and when serious material breaches of health and safety legislation are observed in workplaces, commensurate prosecution action is taken. Two well-recognised merits of the health and safety regulatory system in GB are that it is readily responsive to changes in best practice and that it encourages innovation on the part of employers, including in how health and safety risks are managed. So when a disruptive technology such as AI emerges with the potential to change the way industrial operations are delivered for the better, the GB industrial landscape is naturally very receptive to its adoption across workplaces.

Emerging Industry 4.0 and industrial AI technologies deployed in workplaces have the potential to impact health and safety in two main ways: directly, where the technologies form part of the measures an organisation uses to control health and safety risks in workplaces and the challenge is to deploy them effectively; or indirectly, where the technologies are used as part of operations more generally, for example to enhance operational efficiency, and have the potential to introduce new health and safety risks requiring control. Both present practical challenges from a health and safety perspective.

 

New health and safety risks

The emergence of new health and safety risks in the era of Industry 4.0 provided the focus of two recent reviews, one commissioned by Lloyd’s Register Foundation published in 2019, and another by the European Agency for Safety and Health at Work published in 2018. Potential new health and safety risks identified by these two pieces of work included: musculoskeletal, ergonomic and physical injury risks arising from new human-machine interfaces in workplaces and the increased use of mobile devices in industrial settings; longer term health risks resulting from exposure to new hazardous substances generated by new production techniques; an increased risk of work-related stress resulting from increases in the pace of work, increased work oversight and decreased human contact; and new process safety risks resulting from the increased automation of processes and reduced human oversight of them.

Judging ALARP

The goal-based health and safety regulatory system operated in GB requires the regulator to clearly define the credible bounds of acceptable health and safety performance for a given area of industrial operations. From a legal standpoint, this is recognised to be where residual health and safety risks, following the steps taken by an employer to mitigate them, are judged to be as low as is reasonably practicable (ALARP). At a practical level, judging whether a risk is ALARP often requires one or a combination of: 1) referral to existing industry standards, 2) benchmarking against recognised industry best practice, or 3) a direct comparison of risk versus sacrifice by way of a formal cost-benefit analysis. Whilst such a system of regulation has many merits, it poses challenges for both the regulator and the regulated when new and emerging technologies, such as AI-based technologies, with the potential to give rise to new health and safety risks, start being integrated within industrial process operations. A key challenge facing both the regulator and the regulated in such circumstances is deciding whether or not any health and safety risks associated with the use of the new technology are ALARP. This is because, unlike for established technologies, industry standards often do not exist, best practices are often not yet defined, and the implications of processes going wrong are often poorly characterised.
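The risk-versus-sacrifice comparison mentioned above can be sketched very simply. In such cost-benefit analyses a measure is generally treated as reasonably practicable unless its cost is grossly disproportionate to the risk reduction it buys. The figures and the disproportion factor below are hypothetical, chosen purely for illustration:

```python
# Illustrative sketch of an ALARP-style cost-benefit comparison: a control
# measure need not be taken only if its cost ("sacrifice") is grossly
# disproportionate to the risk reduction it delivers. The disproportion
# factor and all monetary figures here are hypothetical assumptions.

def grossly_disproportionate(cost, risk_reduction_benefit, disproportion_factor):
    """Return True if the sacrifice exceeds the benefit by more than the
    chosen disproportion factor (i.e. the measure need not be implemented)."""
    return cost > disproportion_factor * risk_reduction_benefit

# Hypothetical example: a safeguard costing £50,000 that averts an expected
# £20,000 of harm over its lifetime, judged with a disproportion factor of 3.
print(grossly_disproportionate(50_000, 20_000, 3.0))  # → False: implement it
```

In this illustrative case the cost (£50,000) does not exceed three times the benefit (£60,000), so the measure would be regarded as reasonably practicable. The appropriate disproportion factor in practice depends on the severity of the risk and is a matter of regulatory judgement, not a fixed constant.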


Big data and data governance


Effective deployment of Industry 4.0 and AI technologies in industrial settings has a number of common requirements: the effective handling and analysis of data, the effective generation of actionable insights from it, and the effective communication of those insights are all key. However, many of these pose significant practical challenges to industry. Big data challenges abound, including the handling and storage of large volumes of multi-format data, working with data streams, and using analytics effectively to make sense of it all, including in real time. Using analytics effectively centres on matching the analytic methods used to the sorts of questions that need answering. The categories of insight potentially of interest range from simple operational reporting to diagnosing problems, predicting outcomes, optimising processes and prescribing actions, and even to automating the resulting actions taken. Where actioning is not automatic, another key challenge is getting the analytic outputs to the right people, in a timely manner, communicated in the right way, to enable effective decision-making. Ethics, privacy, data security and data governance challenges also abound; the latter include ensuring that data is of the right quality, is available when needed, is in a usable format, and that any sensitive data is held securely.
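The simplest end of the analytics spectrum described above, descriptive monitoring of a data stream with a threshold alert, can be sketched in a few lines. The sensor readings, window size and threshold below are hypothetical values invented for illustration, not recommendations:

```python
# Minimal sketch of a real-time descriptive analytic over a data stream:
# a rolling mean with a simple threshold alert. All data and parameters
# are hypothetical, for illustration only.
from collections import deque

def rolling_alerts(stream, window=5, threshold=75.0):
    """Yield (reading, rolling_mean, alert) for each reading in the stream."""
    recent = deque(maxlen=window)  # keeps only the last `window` readings
    for reading in stream:
        recent.append(reading)
        mean = sum(recent) / len(recent)
        yield reading, mean, mean > threshold

# Hypothetical temperature readings from an internet-enabled plant sensor.
readings = [70, 72, 71, 80, 90, 95]
for reading, mean, alert in rolling_alerts(readings):
    print(f"reading={reading:>3}  rolling mean={mean:6.2f}  alert={alert}")
```

A production system would of course face the harder challenges noted above: multi-format data at volume, routing alerts to the right people, and securing any sensitive data involved.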

 

Cyber security


Like electronic devices used in domestic or office settings, electronic devices and component parts of systems operated in industrial settings are increasingly internet-enabled in the Industry 4.0 era. The key benefit of this is that it facilitates wireless sharing of data between the key components of systems; such systems characterise the so-called industrial internet of things. One challenge of having industrial systems that are internet-enabled is dealing with the risk of system failures arising from human error, for example when software is being upgraded, or from a malicious attack on the system from an external source, a so-called cyber attack. If such systems are safety critical, which is often the case on major hazard sites, then the potential for such failures to give rise to deleterious health and safety consequences is obviously substantial. In recognition of such risks, HSE published industry operating guidance in 2017 to help operators of major hazard sites mitigate the risks of major accidents resulting from breaches in plant cyber security.

In Part 3 of this article I consider a number of other challenges, namely: 1) the challenges of undertaking accident investigations where failures in complex AI algorithms associated with systems are implicated, 2) the challenges of understanding who is legally liable when complex AI systems go seriously wrong, and 3) the challenge of striking a balance between capitalising on the benefits that AI can bring in this context and ensuring its use is ethical.

 

 
